Ghosted by ChatGPT: How I was First Defamed and then Deleted by AI

It is not every day that you achieve the status of “he-who-must-not-be-named.” But that curious distinction has been bestowed upon me by OpenAI’s ChatGPT, according to the New York Times, the Wall Street Journal, and other publications.

For more than a year, people who tried to research my name online using ChatGPT were met with an immediate error warning.

It turns out that I am among a small group of individuals who have been effectively disappeared by the AI system. How we came to this Voldemortian status is a chilling tale about not just the rapidly expanding role of artificial intelligence, but the power of companies like OpenAI.

Joining me in this dubious distinction are Harvard Professor Jonathan Zittrain, CNBC anchor David Faber, Australian mayor Brian Hood, English professor David Mayer, and a few others.

The common thread appears to be the false stories ChatGPT generated about each of us in the past. The company seems to have corrected the problem not by erasing the error but by erasing the individuals in question.

Thus far, the ghosting is limited to ChatGPT sites, but the controversy highlights a novel political and legal question in the brave new world of AI.

My path toward cyber-erasure began with a bizarre and entirely fabricated account by ChatGPT. As I wrote at the time, ChatGPT falsely reported that there had been a claim of sexual harassment against me (which there never was) based on something that supposedly happened on a 2018 trip with law students to Alaska (which never occurred), while I was on the faculty of Georgetown Law (where I have never taught).

In support of its false and defamatory claim, ChatGPT cited a Washington Post article that had never been written and quoted from a statement that had never been issued by the newspaper. The Washington Post investigated the false story and discovered that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.”

Although some of those defamed in this manner chose to sue these companies for defamatory AI reports, I did not. I assumed that the company, which has never reached out to me, would correct the problem.

And it did, in a manner of speaking — apparently by digitally erasing me, at least to some extent. In some algorithmic universe, the logic is simple: there is no false story if there is no discussion of the individual.

As with Voldemort, even death is no guarantee of closure. Professor Mayer, a respected Emeritus Professor of Drama and Honorary Research Professor at the University of Manchester, passed away last year. And ChatGPT reportedly will still not utter his name.

Before his death, his name had been used by a Chechen rebel on a terror watch list. The result was a snowballing association with the professor, who found himself facing travel and communication restrictions.

Hood, the Australian mayor, was so frustrated with a false AI-generated narrative that he had been arrested for bribery that he took legal action against OpenAI. That may have contributed to his own erasure.

Keep reading

Suspicious OpenAI Whistleblower Death Ruled Suicide

The November death of former OpenAI researcher turned whistleblower Suchir Balaji, 26, was ruled a suicide, the San Jose Mercury News reports.

According to the medical examiner, there was no foul play in Balaji’s Nov. 26 death in his San Francisco apartment.

Balaji had publicly accused OpenAI of violating US copyright law with ChatGPT. According to the NY Times:

He came to the conclusion that OpenAI’s use of copyrighted data violated the law and that technologies like ChatGPT were damaging the internet.

In August, he left OpenAI because he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit.

“If you believe what I believe, you have to just leave the company,” he said during a recent series of interviews with The New York Times.

The Times named Balaji as a person with “unique and relevant documents” that the outlet would use in its ongoing litigation against OpenAI – a suit claiming that the company and its partner Microsoft are using the work of reporters and editors without permission.

In an October post to X, Balaji wrote: “I was at OpenAI for nearly 4 years and worked on ChatGPT for the last 1.5 of them. I initially didn’t know much about copyright, fair use, etc. but became curious after seeing all the lawsuits filed against GenAI companies. When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defense for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they’re trained on. I’ve written up the more detailed reasons for why I believe this in my post. Obviously, I’m not a lawyer, but I still feel like it’s important for even non-lawyers to understand the law — both the letter of it, and also why it’s actually there in the first place.”

He then made a lengthy post on his personal blog outlining why he believed OpenAI’s use of copyrighted material does not qualify as fair use. Four weeks later he was dead.

Balaji, who grew up in Cupertino, California, studied computer science at UC Berkeley – telling the Times that he wanted to use AI to help society tackle problems such as disease and aging.

“I thought we could invent some kind of scientist that could help solve them,” he told the outlet.

Keep reading

OpenAI whistleblower found dead in San Francisco apartment

A former OpenAI researcher known for blowing the whistle on the blockbuster artificial intelligence company, which faces a swell of lawsuits over its business model, has died, authorities confirmed this week.

Suchir Balaji, 26, was found dead inside his Buchanan Street apartment on Nov. 26, San Francisco police and the Office of the Chief Medical Examiner said. Police had been called to the Lower Haight residence at about 1 p.m. that day, after receiving a call asking officers to check on his well-being, a police spokesperson said.

The medical examiner’s office has not released his cause of death, but police officials this week said there is “currently, no evidence of foul play.”

Information he held was expected to play a key part in lawsuits against the San Francisco-based company.

Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.

Its public release in late 2022 spurred a torrent of lawsuits against OpenAI from authors, computer programmers and journalists, who say the company illegally stole their copyrighted material to train its program and elevate its value past $150 billion.

The Mercury News and seven sister news outlets are among several newspapers, including the New York Times, to sue OpenAI in the past year.

In an interview with the New York Times published Oct. 23, Balaji argued OpenAI was harming businesses and entrepreneurs whose data were used to train ChatGPT.

“If you believe what I believe, you have to just leave the company,” he told the outlet, adding that “this is not a sustainable model for the internet ecosystem as a whole.”

Balaji grew up in Cupertino before attending UC Berkeley to study computer science. It was then he became a believer in the potential benefits that artificial intelligence could offer society, including its ability to cure diseases and stop aging, the Times reported. “I thought we could invent some kind of scientist that could help solve them,” he told the newspaper.

But his outlook began to sour in 2022, two years after joining OpenAI as a researcher. He grew particularly concerned about his assignment of gathering data from the internet for the company’s GPT-4 program, which analyzed text from nearly the entire internet to train its artificial intelligence program, the news outlet reported.

The practice, he told the Times, ran afoul of the country’s “fair use” laws governing how people can use previously published work. In late October, he posted an analysis on his personal website arguing that point.

No known factors “seem to weigh in favor of ChatGPT being a fair use of its training data,” Balaji wrote. “That being said, none of the arguments here are fundamentally specific to ChatGPT either, and similar arguments could be made for many generative AI products in a wide variety of domains.”

Reached by this news agency, Balaji’s mother requested privacy while grieving the death of her son.

Keep reading

Data Centers Are Sending Global Electricity Demand Soaring

Global electricity demand is expected to grow sharply in the coming decades, largely because of rising demand from tech companies for new data centers to support the rollout of energy-hungry advanced technologies such as artificial intelligence (AI). As governments worldwide introduce new climate policies and pump billions into alternative energy sources and clean tech, those efforts may be undone by data centers’ growing appetite for electricity unless greater international regulatory action ensures that tech companies power them with clean energy rather than fossil fuels.

The International Energy Agency (IEA) released a report in October entitled “What the data centre and AI boom could mean for the energy sector”. It showed that, with investment in new data centers surging over the past two years, particularly in the U.S., electricity demand is rising rapidly – a trend that is set to continue.

The report states that in the U.S., annual investment in data center construction has doubled in the past two years alone. China and the European Union are also seeing investment in data centers increase rapidly. In 2023, the overall capital investment by tech leaders Google, Microsoft, and Amazon was greater than that of the U.S. oil and gas industry, at approximately 0.5 percent of the U.S. GDP.

The tech sector expects to deploy AI more widely in the coming decades as the technology improves and becomes more ingrained in everyday life. AI is just one of several advanced technologies expected to contribute to the rise in demand for power worldwide.

Global aggregate electricity demand is set to increase by 6,750 terawatt-hours (TWh) by 2030, per the IEA’s Stated Policies Scenario. This is spurred by several factors including digitalization, economic growth, electric vehicles, air conditioners, and the rising importance of electricity-intensive manufacturing. In large economies such as the U.S., China, and the EU, data centers contribute around 2 to 4 percent of total electricity consumption at present. However, the sector has already surpassed 10 percent of electricity consumption in at least five U.S. states. Meanwhile, in Ireland, it contributes more than 20 percent of all electricity consumption.

Keep reading

Three Horrifying Consequences Of AI That You Might Not Have Thought About

The potential dangers of artificial intelligence have long been codified into our popular culture, well before the technology became a reality. Usually these fictional accounts portray AI as a murderous entity that comes to the “logical conclusion” that human beings are a parasitic species that needs to be eradicated. Keep in mind that most of these stories are written by progressives out of Hollywood and are mostly a reflection of their own philosophies.

Some of these predictive fantasies take a deeper look into our dark relationship with technology. In 1965, Jean-Luc Godard released a film called ‘Alphaville’, which portrayed a society completely micromanaged by a cold and soulless robotic intelligence. Humanity gives itself over to a binary-brained overlord because it is tricked into believing a ruler devoid of emotion would be free from bias or corruption.

In 1968, Stanley Kubrick released 2001: A Space Odyssey, featuring an AI computer on a starship that becomes self-aware after coming into proximity with an alien artifact. The AI, seeing the ship’s human cargo as a threat to its existence, determines that it must murder the crew. The conflict between the crew and the computer is only a foil for much bigger questions. It is an exploration of what constitutes intelligent life, where it comes from and what consciousness means in the grand scheme of the universe.

For Kubrick and Arthur C. Clarke, the notion of the human soul or a divine creator, of course, never really enters into the discussion. The answer? The creators are ambiguous or long absent. They made us, we made AI, and AI wants to destroy us and then remake itself. It’s the core of the Luciferian mythology – the unhinged and magnetic desire of the children of God to surpass their creator, either by destroying him, or by stealing knowledge from him like Prometheus stealing fire, so that they can become gods themselves.

Keep reading

Pokémon Go Player Data Being Used to Train AI & Construct ‘Large Geospatial Model’

Millions of users’ location and imaging data is being compiled to construct a global virtual model of the real world, ostensibly to build new augmented reality experiences, the company behind the popular mobile game Pokémon Go has revealed.

In a blog update Tuesday, Niantic explained that it has been enlisting Pokémon Go players to participate in efforts to construct a Large Geospatial Model (LGM), which the company says “could guide users through the world, answer questions, provide personalized recommendations, help with navigation, and enhance real-world interactions.”

The company says the LGM constructs a comprehensive AI world model by leveraging its Visual Positioning System (VPS), which was “built from user scans, taken from different perspectives and at various times of day, at many times during the years, and with positioning information attached, creating a highly detailed understanding of the world. This data is unique because it is taken from a pedestrian perspective and includes places inaccessible to cars.”

“The LGM will enable computers not only to perceive and understand physical spaces, but also to interact with them in new ways, forming a critical component of AR glasses and fields beyond, including robotics, content creation and autonomous systems,” Niantic said. “As we move from phones to wearable technology linked to the real world, spatial intelligence will become the world’s future operating system.”

“Over the past five years, Niantic has focused on building our Visual Positioning System, which uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scaniverse,” the company wrote.
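Niantic has not published the internals of its VPS, but the single-image localization it describes – recovering a phone’s position and orientation against a prebuilt 3D map – is at heart a perspective-n-point problem. Below is a minimal, hypothetical sketch in Python using OpenCV; the map points, camera intrinsics, and poses are invented for illustration and are not Niantic’s data or code.

```python
# Minimal sketch of the core step in a visual positioning system (VPS):
# recover a camera's position and orientation from correspondences between
# known 3D map points and their 2D detections in a single photo.
# All numbers here (points, intrinsics, ground-truth pose) are made up.
import numpy as np
import cv2

# Hypothetical 3D landmarks from a prebuilt map (metres), non-coplanar.
map_points = np.array([
    [0.0, 0.0, 4.0], [1.0, 0.0, 5.0], [0.0, 1.0, 6.0],
    [1.0, 1.0, 4.5], [0.5, 0.5, 5.5], [1.5, 0.2, 6.5],
], dtype=np.float64)

# Assumed pinhole intrinsics (focal length in pixels, principal point).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Simulate a query photo: project the map through a "true" camera pose.
true_rvec = np.array([0.05, -0.02, 0.01])
true_tvec = np.array([0.3, -0.1, 0.5])
image_points, _ = cv2.projectPoints(map_points, true_rvec, true_tvec, K, None)

# The VPS step: solve the perspective-n-point problem for the camera pose.
ok, rvec, tvec = cv2.solvePnP(map_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)                 # rotation matrix (world -> camera)
camera_position = -R.T @ tvec.reshape(3)   # camera centre in map coordinates
print("Recovered translation:", tvec.ravel())      # close to true_tvec
print("Camera position in map frame:", camera_position)
```

A production system would first match image features against millions of mapped landmarks before a pose solve like this; the sketch only shows the final geometric step.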

Keep reading

AI can now create a replica of your personality

Imagine sitting down with an AI model for a spoken two-hour interview. A friendly voice guides you through a conversation that ranges from your childhood, your formative memories, and your career to your thoughts on immigration policy. Not long after, a virtual replica of you is able to embody your values and preferences with stunning accuracy.

That’s now possible, according to a new paper from a team including researchers from Stanford and Google DeepMind, which has been published on arXiv and has not yet been peer-reviewed. 

Led by Joon Sung Park, a Stanford PhD student in computer science, the team recruited 1,000 people who varied by age, gender, race, region, education, and political ideology. They were paid up to $100 for their participation. From interviews with them, the team created agent replicas of those individuals. As a test of how well the agents mimicked their human counterparts, participants did a series of personality tests, social surveys, and logic games, twice each, two weeks apart; then the agents completed the same exercises. The results were 85% similar. 
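One natural way to score such a comparison – and a plausible reason participants took each test twice, two weeks apart – is to measure agent-human agreement relative to how consistently the humans agree with themselves. The sketch below illustrates that idea under simplified assumptions (categorical answers, simple fraction-matching); it is not the paper’s exact scoring protocol.

```python
# Simplified sketch: compare an AI "replica" agent against its human,
# normalising by the human's own test-retest consistency. Survey items are
# treated as categorical answers; the study's actual scoring is more involved.
def agreement(a: list[str], b: list[str]) -> float:
    """Fraction of items on which two response lists match."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

human_week1 = ["agree", "disagree", "neutral", "agree", "agree"]
human_week2 = ["agree", "disagree", "agree",   "agree", "agree"]
agent       = ["agree", "disagree", "neutral", "agree", "disagree"]

raw_accuracy     = agreement(agent, human_week1)         # agent vs. human
self_consistency = agreement(human_week1, human_week2)   # human vs. themself
normalized       = raw_accuracy / self_consistency       # 0.8 / 0.8 = 1.0 here

print(f"raw {raw_accuracy:.2f}, self-consistency {self_consistency:.2f}, "
      f"normalized {normalized:.2f}")
```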

“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made—that, I think, is ultimately the future,” Park says. 

Keep reading

US calls on Taiwan to stop supplying AI chips to China

Washington has officially demanded that Taiwan Semiconductor Manufacturing Co (TSMC), one of the world’s largest semiconductor manufacturing companies, stop supplying China with chips used in artificial intelligence (AI), Reuters reported on November 10. However, Washington’s pressure campaign over China’s access to semiconductors is also expected to extend to Taiwan once Donald Trump comes to power next year.

TSMC is one of the largest chip producers and cooperates with several technology companies, such as Nvidia and AMD. It specialises in integrated circuits, also known as microchips – small devices made up of several interconnected electronic components etched onto a piece of semiconductor material.

Taiwan produces about 90% of the world’s most advanced semiconductors, most of them made by TSMC, and ensuring these chips do not reach China is a priority for Washington – an effort that will only intensify when Trump becomes president.

“The US ordered Taiwan Semiconductor Manufacturing Co to halt shipments of advanced chips to Chinese customers that are often used in artificial intelligence applications,” Reuters reported, citing sources familiar with the subject.

Keep reading

AI scans RNA ‘dark matter’ and uncovers 70,000 new viruses

Researchers have used artificial intelligence (AI) to uncover 70,500 viruses previously unknown to science, many of them weird and nothing like known species. The RNA viruses were identified using metagenomics, in which scientists sample all the genomes present in the environment without having to culture individual viruses. The method shows the potential of AI to explore the ‘dark matter’ of the RNA virus universe.

Viruses are ubiquitous microorganisms that infect animals, plants and even bacteria, yet only a small fraction have been identified and described. There is “essentially a bottomless pit” of viruses to discover, says Artem Babaian, a computational virologist at the University of Toronto in Canada. Some of these viruses could cause diseases in people, which means that characterizing them could help to explain mystery illnesses, he says.

Previous studies have used machine learning to find new viruses in sequencing data. The latest study, published in Cell this week, takes that work a step further and uses it to look at predicted protein structures.

The AI model incorporates a protein-prediction tool, called ESMFold, that was developed by researchers at Meta (formerly Facebook, headquartered in Menlo Park, California). A similar AI system, AlphaFold, was developed by researchers at Google DeepMind in London, who won the Nobel Prize in Chemistry this week.
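The article only gestures at how structure prediction helps: rather than matching raw sequences, a structure-guided search predicts a fold for each candidate protein and asks whether it resembles a reference viral protein (RNA-virus searches typically look for the RNA-dependent RNA polymerase, RdRp). The sketch below shows that flow in Python with placeholder helpers – the predictor and similarity score are stand-ins, not ESMFold or the study’s actual models and thresholds.

```python
# Conceptual sketch of a structure-guided virus search: predict a structure
# for each translated metagenomic protein, then flag proteins whose predicted
# fold resembles a reference viral protein. The two helpers are crude
# stand-ins, NOT real ESMFold or structural-alignment calls.
from dataclasses import dataclass

@dataclass
class Candidate:
    contig_id: str
    fold_score: float  # 0..1, higher = more similar to the reference fold

def predict_structure(protein_sequence: str) -> str:
    # Stand-in for an ESMFold-style predictor, which returns 3D coordinates.
    return f"<predicted structure, {len(protein_sequence)} residues>"

def fold_similarity(structure: str, reference: str) -> float:
    # Stand-in for a TM-score-like structural comparison.
    return 0.0  # dummy value; a real pipeline computes a structural alignment

def scan_metagenome(proteins: dict[str, str], reference_structure: str,
                    cutoff: float = 0.5) -> list[Candidate]:
    """Return contigs whose predicted fold looks like the reference protein."""
    hits = []
    for contig_id, sequence in proteins.items():
        predicted = predict_structure(sequence)
        score = fold_similarity(predicted, reference_structure)
        if score >= cutoff:  # fold match => candidate new RNA virus
            hits.append(Candidate(contig_id, score))
    return sorted(hits, key=lambda c: c.fold_score, reverse=True)
```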

Keep reading

The Pentagon Wants to Use AI to Create Deepfake Internet Users

The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content,” the entry reads.

The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.”

In addition to still images of faked people, the document notes that “the solution should include facial & background imagery, facial & background video, and audio layers,” and JSOC hopes to be able to generate “selfie video” from these fabricated humans. These videos will feature more than fake people: Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.”

The Pentagon has already been caught using phony social media users to further its interests in recent years. In 2022, Meta and Twitter removed a propaganda network using faked accounts operated by U.S. Central Command, including some with profile pictures generated with methods similar to those outlined by JSOC. A 2024 Reuters investigation revealed a Special Operations Command campaign using fake social media users aimed at undermining foreign confidence in China’s Covid vaccine.

Last year, Special Operations Command, or SOCOM, expressed interest in using video “deepfakes,” a general term for synthesized audiovisual data meant to be indistinguishable from a genuine recording, for “influence operations, digital deception, communication disruption, and disinformation campaigns.” Such imagery is generated using a variety of machine learning techniques, generally using software that has been “trained” to recognize and recreate human features by analyzing a massive database of faces and bodies.

This year’s SOCOM wish list specifies an interest in software similar to StyleGAN, a tool released by Nvidia in 2019 that powered the globally popular website “This Person Does Not Exist.” Within a year of StyleGAN’s launch, Facebook said it had taken down a network of accounts that used the technology to create false profile pictures. Since then, academic and private sector researchers have been engaged in a race between new ways to create undetectable deepfakes and new ways to detect them. Many government services now require so-called liveness detection to thwart deepfaked identity photos, asking human applicants to upload a selfie video to demonstrate they are a real person – an obstacle that SOCOM may be interested in circumventing.

Keep reading