It is not every day that you achieve the status of “he-who-must-not-be-named.” But that curious distinction has been bestowed upon me by OpenAI’s ChatGPT, according to the New York Times, Wall Street Journal, and other publications.
For more than a year, people who tried to research my name using ChatGPT were met with an immediate error message.
It turns out that I am among a small group of individuals who have been effectively disappeared by the AI system. How we came to this Voldemortian status is a chilling tale about not just the rapidly expanding role of artificial intelligence, but the power of companies like OpenAI.
Joining me in this dubious distinction are Harvard Professor Jonathan Zittrain, CNBC anchor David Faber, Australian mayor Brian Hood, English professor David Mayer, and a few others.
The common thread appears to be the false stories ChatGPT generated about us all in the past. The company appears to have corrected the problem not by erasing the errors but by erasing the individuals in question.
Thus far, the ghosting is limited to ChatGPT sites, but the controversy highlights a novel political and legal question in the brave new world of AI.
My path toward cyber-erasure began with a bizarre and entirely fabricated account by ChatGPT. As I wrote at the time, ChatGPT falsely reported that there had been a claim of sexual harassment against me (which there never was) based on something that supposedly happened on a 2018 trip with law students to Alaska (which never occurred), while I was on the faculty of Georgetown Law (where I have never taught).
In support of its false and defamatory claim, ChatGPT cited a Washington Post article that had never been written and quoted from a statement that had never been issued by the newspaper. The Washington Post investigated the false story and discovered that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.”
Although some of those defamed in this way chose to sue the companies over the false AI reports, I did not. I assumed that the company, which has never reached out to me, would correct the problem.
And it did, in a manner of speaking — apparently by digitally erasing me, at least to some extent. In some algorithmic universe, the logic is simple: there is no false story if there is no discussion of the individual.
As with Voldemort, even death is no guarantee of closure. Professor Mayer, a respected Emeritus Professor of Drama and Honorary Research Professor at the University of Manchester, passed away last year. ChatGPT reportedly still will not utter his name.
Before his death, his name had been used by a Chechen rebel on a terror watch list. The result was a snowballing association, and the professor found himself facing travel and communication restrictions.
Hood, the Australian mayor, was so frustrated by a false AI-generated narrative claiming he had been arrested for bribery that he took legal action against OpenAI. That may have contributed to his own erasure.