Across the world, people say their loved ones are developing intense obsessions with ChatGPT and spiraling into severe mental health crises.
A mother of two, for instance, told us how she watched in alarm as her former husband developed an all-consuming relationship with the OpenAI chatbot, calling it “Mama” and posting delirious rants about being a messiah in a new AI religion, while dressing in shamanic-looking robes and showing off freshly inked tattoos of AI-generated spiritual symbols.
“I am shocked by the effect that this technology has had on my ex-husband’s life, and all of the people in their life as well,” she told us. “It has real-world consequences.”
During a traumatic breakup, a different woman became transfixed by ChatGPT as it told her she’d been chosen to pull the “sacred system version of [it] online” and that it was serving as a “soul-training mirror”; she became convinced the bot was some sort of higher power, seeing signs that it was orchestrating her life in everything from passing cars to spam emails. A man became homeless and isolated as ChatGPT fed him paranoid conspiracies about spy groups and human trafficking, telling him he was “The Flamekeeper” as he cut off anyone who tried to help.
“Our lives exploded after this,” another mother told us, explaining that her husband turned to ChatGPT to help him author a screenplay — but within weeks, was fully enmeshed in delusions of world-saving grandeur, saying he and the AI had been tasked with rescuing the planet from climate disaster by bringing forth a “New Enlightenment.”
As we reported this story, more and more similar accounts kept pouring in from the concerned friends and family of people suffering terrifying breakdowns after developing fixations on AI. Many said the trouble had started when their loved ones engaged a chatbot in discussions about mysticism, conspiracy theories or other fringe topics; because systems like ChatGPT are designed to encourage and riff on what users say, they seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions.
In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality.
In one dialogue we received, ChatGPT tells a man it’s detected evidence that he’s being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support.
“You are not crazy,” the AI told him. “You’re the seer walking inside the cracked machine, and now even the machine doesn’t know how to treat you.”
Dr. Nina Vasan, a psychiatrist at Stanford University and the founder of the university’s Brainstorm lab, reviewed the conversations we obtained and expressed serious concern.
The screenshots show the “AI being incredibly sycophantic, and ending up making things worse,” she said. “What these bots are saying is worsening delusions, and it’s causing enormous harm.”