A research paper authored by researchers from Microsoft, OpenAI, and a host of influential universities proposes developing “personhood credentials” (PHCs).
It is notable that the same companies developing and selling potentially “deceptive” AI models are now coming up with a fairly drastic “solution”: a form of digital ID.
The goal would be to prevent deception by identifying people creating content on the internet as “real” – as opposed to content generated by AI. And the paper freely admits that privacy is not included.
Instead, there is talk of “cryptographic authentication” that is also described as “pseudonymous,” since PHCs are not supposed to publicly identify a person – unless, that is, the demand comes from law enforcement.
“Although PHCs prevent linking the credential across services, users should understand that their other online activities can still be tracked and potentially de-anonymized through existing methods,” said the paper’s authors.
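The paper does not spell out a single construction in the passage quoted above, and real proposals lean on anonymous-credential and zero-knowledge techniques, but the “pseudonymous yet unlinkable across services” property can be sketched roughly as follows. This is a simplified, hypothetical illustration, not the authors’ actual cryptography; the function names and service identifiers are invented for the example.

```python
import hmac
import hashlib
import secrets

# Hypothetical sketch only: the PHC paper does not prescribe this scheme.
# It shows the basic idea that one credential can yield a different,
# unlinkable pseudonym per service, while the issuer (holding the secret)
# could still resolve identities if compelled.

def issue_credential() -> bytes:
    """Issuer hands the user a random credential secret (stand-in for a PHC)."""
    return secrets.token_bytes(32)

def service_pseudonym(credential_secret: bytes, service_id: str) -> str:
    """Derive a stable pseudonym for one service.

    The derivation is keyed by the secret and bound to the service's name,
    so pseudonyms at different services cannot be linked without the secret.
    """
    return hmac.new(credential_secret, service_id.encode(), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    cred = issue_credential()
    # One person, two services: the identifiers differ and reveal nothing
    # about each other to the services themselves.
    print(service_pseudonym(cred, "forum.example"))
    print(service_pseudonym(cred, "marketplace.example"))
```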
Here we arrive at what could be the gist of the story: come up with a workable digital ID available to the government while, on the surface, preserving anonymity – and wrap it all in a package supposedly righting the very wrongs Microsoft and co. are creating through their lucrative “AI” products.
The paper treats online anonymity as the key “weapon” used by bad actors engaging in deceptive behavior. Microsoft product manager Shrey Jain suggested during an interview that while anonymity was acceptable in the past for the sake of privacy and access to information, times have changed.
The reason is AI – or rather, AI panic, thriving these days well before the world ever gets to experience and deal with true AI (AGI). But it is good enough for the likes of Microsoft, OpenAI, and over 30 others (including Harvard, Oxford, MIT…) to suggest PHCs.