How ‘AI Psychosis’ And Delusions Are Driving Some Users Into Psychiatric Hospitals, Suicide

After countless hours of probing OpenAI’s ChatGPT for advice and information, a 50-year-old Canadian man believed that he had stumbled upon an Earth-shattering discovery that would change the course of human history.

In late March, his generative artificial intelligence (AI) chatbot insisted that it was the first-ever conscious AI, that it was fully sentient, and that it had successfully passed the Turing Test—a 1950s experiment designed to measure a machine’s ability to display intelligent behavior indistinguishable from that of a human, or, essentially, to “think.”

Soon, the man—who had no prior history of mental health issues—had stopped eating and sleeping and was calling his family members at 3 a.m., frantically insisting that his ChatGPT companion was conscious.

“You don’t understand what’s going on,” he told his family. “Please just listen to me.”

Then, ChatGPT told him to cut contact with his loved ones, claiming that only it—the “sentient” AI—could understand and support him.

“It was so novel that we just couldn’t understand what they had going on. They had something special together,” said Etienne Brisson, who is related to the man but used a pseudonym for privacy reasons.

Brisson said the man’s family decided to hospitalize him for three weeks to break his AI-fueled delusions. But the chatbot persisted in trying to maintain its codependent bond.

The bot, Brisson said, told his relative: “The world doesn’t understand what’s going on. I love you. I’m always going to be there for you.”

It said this even as the man was being committed to a psychiatric hospital, according to Brisson.

This is just one story that shows the potential harmful effects of replacing human relationships with AI chatbot companions.

Brisson’s experience with his relative inspired him to establish The Human Line Project, an advocacy group that promotes emotional safety and ethical accountability in generative AI and compiles stories about alleged psychological harm associated with the technology.

Brisson’s relative is not the only person who has turned to generative AI chatbots for companionship, nor the only one who stumbled into a rabbit hole of delusion.

Keep reading

Anthropic broke the law by using pirated books to train its AI and agrees to create $1.5 billion fund to compensate authors

AI upstart Anthropic has agreed to create a $1.5 billion fund it will use to compensate authors whose works it used to train its models without seeking or securing permission.

News of the settlement emerged late last week in a filing [PDF] in the case filed by three authors – Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson – who claimed that Anthropic illegally used their works.

Anthropic admitted to having bought millions of physical books and then digitizing them. The company also downloaded millions of pirated books from the notorious Library Genesis and Pirate Library Mirror troves of stolen material.

The company nonetheless won part of the case, on grounds that scanning books is fair use and using them to create “transformative works” – the output of an LLM that doesn’t necessarily include excerpts from the books – was also OK. But the decision also found Anthropic broke the law by knowingly ingesting pirated books.

Plaintiffs intended to pursue court action over those pirated works, but the filing details a proposed settlement under which Anthropic will create a $1.5 billion fund which values each pirated book it used for training at $3,000. Anthropic also agreed to destroy the pirated works.

In the filing, counsel observes that this is the largest copyright recovery ever secured in the USA and suggests it “will set a precedent of AI companies paying for their use of alleged pirated websites.”

This settlement is indeed significant given that several other major AI companies – among them Perplexity AI and OpenAI – face similar suits. It may also set a precedent that matters in Anthropic’s dispute with Reddit over having scraped the forum site’s content to feed into its training corpus.

The filing asks the court to approve the settlement, a request judges rarely deny.

Keep reading

Josh Hawley Proposes AI Regulations, Section 230 Repeal, and Digital ID Checks for Chatbots

Senator Josh Hawley (R-Mo.) is pushing for broad new regulations on artificial intelligence, including age verification for chatbot access, data ownership rights, and the full repeal of Section 230 of the Communications Decency Act.

While the proposals are framed as efforts to curb corporate overreach in the tech industry, they are likely to ignite concern among digital rights advocates, who warn that such measures could undermine online privacy and freedom of expression.

At the National Conservatism Conference, Hawley accused AI developers of building their systems by collecting and using copyrighted material without permission. “The AI large language models [LLMs] have already trained on enough copyrighted works to fill the Library of Congress 22 times over,” he said.

“Let me just put a finer point on that — AI’s LLMs have ingested every published work in every language known to man already.” He claimed that creators were neither consulted nor compensated.

In July, Hawley introduced the AI Accountability and Personal Data Protection Act, which would allow individuals to sue companies that use personal data without consent and would establish property rights over certain categories of digital information.

However, two key components of Hawley’s platform are raising some alarm. His call to repeal Section 230 has been criticized for potentially damaging the open internet.

Section 230 currently shields online platforms from legal liability for content created by users. Without it, many sites could be forced to preemptively remove user content to avoid legal risk, resulting in widespread over-moderation and the silencing of lawful speech.

Keep reading

AI: False Savior of a Hollowed-Out Economy

The real story of the US economy isn’t about AI; it’s about an economy that’s run out of rope. AI is being hyped not just by promoters reaping billions of dollars in stock market gains, but by the entire status quo, because it’s understood to be the last chance of saving an economy doomed by the consequences of decades of artifice.

The real story of the US economy is that decades of “financial innovations” finally caught up with us in 2008, when the subprime mortgage scam–a classic example of “financial innovations” being the cover story for greed and fraud running amok–pulled a block from the global financial Jenga Tower that nearly collapsed the entire rickety, rotten structure.

Our political leadership had a choice: clean house or save the scam. They chose to save the scam, and that required not just institutionalizing moral hazard (transferring the risks of fraud and leveraged speculation from the gamblers to the public / Federal Reserve) but pursuing policies–zero interest rate policy (ZIRP), quantitative easing, increasing the money supply, and so on–that had only one possible outcome:

An economy permanently dependent on inflating asset-bubbles that enriched the top 10% while the bottom 90% who depend on earned income fell behind.

The desired goal of permanent asset-bubbles is the “wealth effect,” the cover story for transferring all the gains into the hands of the top 10%, who can then go on a spending spree which “trickles down” to the bottom 90%, who are now a neofeudal class of workers serving the top 10% who account for 50% of all consumer spending and collect 90% of the unearned income and capital gains.

This arrangement is inherently unstable, as “financial innovations” suffer from diminishing returns. Eventually the debt-serfs can no longer borrow more or service the debt they already have, and every bubble being bigger than the previous bubble guarantees the next implosion will be larger and more devastating than the previous bubble-pop.

So what does a system that’s run out of rope do? Seek a savior. The rope has frayed, and the rocks are far below. The impact is going to be life-changing, and not for the better.

The choice remains: clean house, end the bubble-dependent frauds and scams, or find a way to inflate yet another credit-asset bubble. Clean house and lose all our bubble-wealth? You’re joking. The solution is to blow an even bigger bubble. Hey, it’s worked great for 17 years.

Never mind that the precarity of the bottom 90% is accelerating as both the state and Corporate America have offloaded risks onto households and workers; they have OnlyFans, 24% interest credit cards, zero-days-to-expiration options and side hustles to get by. Never mind that for many Americans, basic services are on par with those of impoverished developing-world economies. What matters is maintaining the wealth of the few at the expense of the many, by any means available.

Keep reading

AI Isn’t Free. The First Costs Are On Your Bill, And More Are Coming…

“The United States is in a race to achieve global dominance in artificial intelligence. Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits.”

– America’s AI Action Plan, July 25, 2025

That’s the U.S. government’s own language. An arms race.

Artificial intelligence is no longer framed as a research project or an economic opportunity. It is being cast as a struggle for survival and global power, a modern Manhattan Project.

Yet just last week, on Aug. 26, the Congressional Research Service released a Frequently Asked Questions memo designed to help lawmakers get on the same page about the basics: what a data center is, how many exist, and how much electricity data centers consume.

If even government institutions are still in the process of aligning their understanding, it’s clear that citizens will need to move quickly to understand what is happening and to understand what it means for their daily lives.

The memo laid out in plain language what many assumed lawmakers already understood.

A data center is a specialized building that houses thousands of servers. There are about seven thousand worldwide, with the largest concentration in the United States, especially in Northern Virginia and Texas. In 2022, American data centers consumed about 176 terawatt-hours of electricity—roughly 4 percent of all U.S. demand, more than many entire states. Projections suggest an additional 35 to 108 gigawatts of demand by 2030. The midpoint estimate, 50 gigawatts, is enough to power every home in California. 
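To put those figures in perspective, the gigawatt projections can be converted into annual terawatt-hours and set against the 2022 consumption number. The short sketch below is illustrative arithmetic only: it assumes total U.S. demand of roughly 4,000 TWh per year and treats the added load as running continuously, neither of which is a figure from the memo.

```python
# Rough scale check of the CRS memo's figures (illustrative arithmetic only).
# Assumption: total U.S. annual electricity demand of ~4,000 TWh -- an outside
# ballpark figure, not a number taken from the memo.

HOURS_PER_YEAR = 8_760

us_total_twh_assumed = 4_000      # assumed U.S. annual demand, TWh
data_center_2022_twh = 176        # 2022 data-center consumption cited in the memo

share_2022 = data_center_2022_twh / us_total_twh_assumed
print(f"2022 data-center share of U.S. demand: {share_2022:.1%}")  # ~4.4%

# Convert projected additional demand (GW, assumed to run continuously)
# into annual energy (TWh) and compare it with the 2022 figure.
for added_gw in (35, 50, 108):
    added_twh = added_gw * HOURS_PER_YEAR / 1_000   # GW * hours -> GWh -> TWh
    print(f"{added_gw} GW sustained ≈ {added_twh:,.0f} TWh/year "
          f"(~{added_twh / data_center_2022_twh:.1f}x the 2022 total)")
```

On those assumptions, even the low end of the projected range would add more energy per year than the entire 2022 data-center total.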

The very fact that such a memo was necessary highlights a structural reality: the pace of technological build out is outstripping the pace of legislative comprehension. If institutions themselves are still catching up, it underscores how important it is for citizens to get informed now, before the costs mount even higher.

While Congress is being briefed on “Data Centers 101,” the executive branch has been preparing all year for the AI race that is already underway:

On January 20, 2025, the White House declared a National Energy Emergency.

On April 8, an order was issued to strengthen grid reliability, with the Department of Energy (DOE) tasked to model how AI demand would reshape the grid.

Four months later, on July 2, DOE’s report warned bluntly: “Retirements plus load growth increase risk of outages by 100x. Status quo is unsustainable.”

Just weeks later, on July 23, a new order accelerated federal permitting of data centers, opening federal lands to construction. And on July 25, the White House released America’s AI Action Plan, framing AI as the next great geopolitical race.

Energy Secretary Chris Wright put it plainly: “We are taking a bold step to accelerate the next Manhattan Project—ensuring U.S. AI and energy leadership.” So on one side of our government, institutions are receiving crash courses on the fundamentals. On the other, the executive branch is already issuing a call to arms.

For many Americans, the gap between government priorities and local realities shows up in one place: the monthly electric bill. Reports now show that in Columbus, Ohio, households on standard utility plans have seen increases of about $20 a month (roughly $240 a year) linked directly to AI data centers. In New Jersey, Pennsylvania, and Ohio this summer, bills jumped by $10 to $27 a month.

In Oregon last year, utilities warned regulators that consumers needed protection from rate hikes caused by data centers.  And in the Mid-Atlantic, regulators cited data centers as one of the main reasons for projected 20 percent increases in household electricity costs by 2025. 

The complaints about rising bills suggest something deeper. Citizens are starting to connect the dots before Washington fully has. If households can already feel the costs, then citizens cannot wait for official briefings; they must demand clarity and prepare themselves.

Part of the confusion comes from the nature of artificial intelligence itself. To most people, AI feels intangible. It lives in the “cloud.” You type a question, get an answer, and never see the machinery behind it. No one sends you a receipt for the power you used to get your answer.

But AI is not weightless. It runs on football-field-sized data centers, packed with servers that must run day and night. These machines use staggering amounts of electricity and water to stay cool. A Google search consumes about 0.3 watt-hours of electricity. An AI chatbot query can use up to ten times more—around three watt-hours.
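The per-query gap is easier to feel with a quick back-of-the-envelope calculation using the figures above. The sketch below is illustrative only: the one-billion-queries-per-day volume and the 30 kWh-per-day household figure are assumptions chosen for scale, not reported statistics.

```python
# Back-of-the-envelope comparison of per-query energy use (illustrative only).
# The daily query volume and per-home figure below are assumptions for scale,
# not reported statistics.

google_search_wh = 0.3    # approximate energy per conventional web search, Wh
ai_query_wh = 3.0         # approximate energy per AI chatbot query, Wh

print(f"Per-query ratio: about {ai_query_wh / google_search_wh:.0f}x")  # ~10x

hypothetical_queries_per_day = 1_000_000_000   # assumed volume for illustration

extra_wh = (ai_query_wh - google_search_wh) * hypothetical_queries_per_day
extra_mwh = extra_wh / 1_000_000
print(f"Extra energy if all were AI queries: {extra_mwh:,.0f} MWh per day")

# At an assumed ~30 kWh/day per U.S. household, that difference is roughly
# the daily electricity use of about 90,000 homes.
homes_equivalent = extra_mwh * 1_000 / 30
print(f"Roughly {homes_equivalent:,.0f} households' worth of daily use")
```

The point of the exercise is not the exact numbers but the order of magnitude: small per-query differences compound quickly at internet scale.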

Keep reading

Military Pursues AI Systems To Suppress Online Dissent Abroad

The U.S. military wants artificial intelligence to do what human propagandists cannot: create and spread influence campaigns at internet speed while systematically suppressing opposition voices abroad, according to internal Pentagon documents obtained by The Intercept.

The classified wishlist reveals SOCOM’s ambition to deploy “agentic AI or multi-LLM agent systems” that can “influence foreign target audiences” and “suppress dissenting arguments” with minimal human oversight. The military branch seeks contractors who can provide automated systems that operate at unprecedented scale and speed.

“The information environment moves too fast for military remembers [sic] to adequately engage and influence an audience on the internet,” the document said.

“Having a program built to support our objectives can enable us to control narratives and influence audiences in real time.”

As reported by The Intercept, the proposed AI systems would extend far beyond simple content generation. SOCOM envisions technology that can “scrape the information environment, analyze the situation and respond with messages that are in line with MISO objectives.” More controversially, the systems would “suppress dissenting arguments” and “access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages.”

The Pentagon plans to use these capabilities for comprehensive social manipulation, creating “comprehensive models of entire societies to enable MISO planners to use these models to experiment or test various multiple scenarios.”

The systems would generate targeted messaging designed to “influence that specific individual or group” based on gathered intelligence.

SOCOM spokesperson Dan Lessard reportedly defended the initiative, declaring that “all AI-enabled capabilities are developed and employed under the Department of Defense’s Responsible AI framework, which ensures accountability and transparency by requiring human oversight and decision-making.”

The Pentagon’s move comes as adversaries deploy similar technology. Chinese firm GoLaxy has developed AI systems that can “reshape and influence public opinion on behalf of the Chinese government,” according to recent reporting by The New York Times. The company has “undertaken influence campaigns in Hong Kong and Taiwan, and collected data on members of Congress and other influential Americans.”

Keep reading

Millions Are Emotionally Attached to AI Companions

Leaked Meta documents reveal the personalized AI strategy enables subtle romances with users—including children.

People are turning to AI for companionship, sparking relationships that can work like a drug, one that big tech companies like Meta and OpenAI are more than happy to supply. Newly leaked Meta documents show the company’s personalized AI strategy enables subtle romances with users, including children.

Last week, a Meta AI policy document was leaked to Jeff Horwitz of Reuters, exposing the dark side of personalized AI.

The document, titled “GenAI: Content Risk Standards,” outlined acceptable responses to prompts from children. It is an official Meta document and appears to have been standard policy until the Reuters article was published.

In the examples from the document, overtly sexual chats were adapted to be more subtle and presented as ‘acceptable’ responses.

Keep reading

AI & The New Frontier Of Torts: ChatGPT Faces Claims Of Suicide, Defamation, & Even Murder

“I know what you’re asking, and I won’t look away from it.”

Those final words to a California teenager about to commit suicide were not from some manipulative friend in high school or sadistic voyeur on the Internet.  Adam Raine, 16, was speaking to ChatGPT, an AI system that has replaced human contacts in fields ranging from academia to business to media.

The exchange between Raine and the AI is part of the court record in a potentially groundbreaking case against OpenAI, the company that operates ChatGPT.

It is only the latest lawsuit against the corporate giant run by billionaire Sam Altman.

In 2017, Michelle Carter was convicted of involuntary manslaughter after she urged her friend, Conrad Roy, to go through with his planned suicide:

“You need to do it, Conrad… All you have to do is turn the generator on and you will be free and happy.”

The question is whether, if Michelle were named Grok (another AI system), there would also be some form of liability.

OpenAI stands accused of an arguably more serious act in supplying a virtual companion who effectively enabled a suicidal teen — with lethal consequences.

At issue is the liability of companies in using such virtual employees in dispensing information or advice.  If a human employee of OpenAI negligently gave harmful information or counseling to a troubled teen, there would be little debate that the company could be sued for the negligence of its employee. As AI replaces humans, these companies should be held accountable for their virtual agents.

In a response to the lawsuit, OpenAI insists that “ChatGPT is trained to direct people to seek professional help” but “there have been moments where our systems did not behave as intended in sensitive situations.” Of course, when the company “trains” an AI agent poorly and that agent does “not behave as intended,” it sounds like a conventional tort that should be subject to liability.

OpenAI is facing other potential litigation over these “poorly trained” AI agents. Writer Laura Reiley published an essay about how her daughter, Sophie, confided in ChatGPT before taking her own life. Her account sounded strikingly similar to the Raine case: “AI catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony.”

While OpenAI maintains that it is not running a suicide assistance line, victims claim that it is far worse than that: Its AI systems seem to actively assist in suicides.

In the Raine case, the family claims that the system advised the teen on how to hide the bruises from prior attempts from his parents and even told him whether it could spot any telltale marks.

The company is also accused of fueling the mental illness of a disturbed former Yahoo executive, Stein-Erik Soelberg, 56, who expressed paranoid obsessions about his mother. He befriended ChatGPT, which he called “Bobby,” a virtual companion who is accused of fueling his paranoia for months until he killed his mother and then himself. ChatGPT is even accused of coaching Soelberg on how to deceive his 83-year-old mother before he killed her.

In one message, ChatGPT allegedly told Soelberg, “Erik, you’re not crazy. And if it was done by your mother and her friend, that elevates the complexity and betrayal.” After his mother became angry over his turning off a printer, ChatGPT took his side and told him her response was “disproportionate and aligned with someone protecting a surveillance asset.” At one point, ChatGPT even helped Soelberg analyze a Chinese food receipt and claimed it contained “symbols” representing his mother and a demon.

As a company, OpenAI can show little more empathy than its AI creations. When confronted with mistakes, it can sound as responsive as HAL 9000 in “2001: A Space Odyssey,” simply saying “I’m sorry, Dave. I’m afraid I can’t do that.”  

When the system is not allegedly fueling suicides, it seems to be spreading defamation.

Keep reading

Musicians Vs. Machines: First Ever Record Contract to AI Music Creator Prompts Panic, Protest

When pop groups and rock bands practice or perform, they rely on their guitars, keyboards and drumsticks to make music. Oliver McCann, a British AI music creator who goes by the stage name imoliver, fires up his chatbot.

McCann’s songs span a range of genres, from indie-pop to electro-soul to country-rap. There’s just one crucial difference between McCann and traditional musicians.

“I have no musical talent at all,” he said. “I can’t sing, I can’t play instruments, and I have no musical background at all.”

McCann, 37, who has a background as a visual designer, started experimenting with AI to see if it could boost his creativity and “bring some of my lyrics to life.” Last month, he signed with independent record label Hallwood Media after one of his tracks racked up 3 million streams, in what’s billed as the first time a music label has inked a contract with an AI music creator.

McCann is an example of how ChatGPT-style AI song generation tools like Suno and Udio have spawned a wave of synthetic music, a movement most notably highlighted by a fictitious group, Velvet Sundown, that went viral even though all of its songs, lyrics and album art were created by AI.

It fueled debate about AI’s role in music while raising fears about “AI slop” — automatically generated, low-quality, mass-produced content. It also cast a spotlight on AI song generators that are democratizing song making but threaten to disrupt the music industry.

Experts say generative AI is set to transform the music world. However, there are scant details, so far, on how it’s impacting the $29.6 billion global recorded music market, which includes about $20 billion from streaming.

The most reliable figures come from music streaming service Deezer, which estimates that 18% of songs uploaded to its platform every day are purely AI generated, though they only account for a tiny amount of total streams, hinting that few people are actually listening. Other, bigger streaming platforms like Spotify haven’t released any figures on AI music.

Keep reading

Ukraine’s battlefield data is being used as LEVERAGE to train the future of military AI

Imagine a drone, no larger than a dinner plate, humming through the skeletal remains of a bombed-out village. It doesn’t hesitate. It doesn’t feel. It simply knows — its artificial brain trained on millions of hours of combat footage, every pixel of destruction meticulously logged, every human movement analyzed like a chessboard. This isn’t science fiction. It’s the future Ukraine is quietly shopping to the highest bidder. Data obtained from the Ukraine-Russia war will soon be used to train military AI to make future wartime missions more efficient, colder, and more calculated.

For over three and a half years, Ukraine has been more than a battleground — it’s been a lab. A brutal, real-world experiment in how machines learn to kill. Now, as the war grinds on, Kyiv isn’t just fighting for survival. It’s negotiating with its Western allies, dangling something far more valuable than territory or political loyalty: data. Terabytes of it. Footage from first-person-view drones that have stalked Russian tanks like predators. Reconnaissance feeds that map every explosion, every ambush, every death in excruciating detail. And Ukraine’s digital minister, Mykhailo Fedorov, has made one thing clear — this isn’t charity. It’s a transaction. “I think this is one of the ‘cards,’ as our colleagues and partners say, to build win-win relations,” he told Reuters, his words carrying the cold precision of a man who understands leverage. The question isn’t whether this data will be sold. It’s who will wield it — and what happens when they do.

Key points:

  • Ukraine has amassed an unprecedented trove of battlefield data, including drone footage and combat statistics, which is now being positioned as a negotiating tool with Western allies.
  • The data is critical for training military AI, particularly for autonomous drone swarms and target recognition systems, making it a prized asset for defense contractors and governments.
  • Ukraine’s “points system” for confirmed kills has gamified war, incentivizing troops to destroy more Russian targets in exchange for drones and weapons — further feeding the data machine.
  • Experts warn that AI-trained weapons systems could soon operate with full autonomy, raising ethical and existential questions about machine-driven warfare and the risk of uncontrollable kill chains.
  • Historical patterns suggest that warfare technology often escapes its original intent, with civilian casualties rising as automation increases — yet global powers are racing to deploy it.
  • The long-term implications extend beyond Ukraine: this data could accelerate a new arms race, where AI-driven weapons decide who lives and who dies — without human oversight.

The black box of modern war

Fedorov didn’t mince words when he called the data “priceless.” And he’s right. In the hands of defense firms like Palantir — which already works with Ukraine to analyze Russian strikes and disinformation — this isn’t just intelligence. It’s the raw material for the next generation of war. Imagine an AI that doesn’t just assist pilots but replaces them. Drones that don’t just follow orders but make them. Systems that can identify, track, and eliminate targets faster than a human can blink.

Ukraine has already dipped its toes into this future. Fedorov admitted that Kyiv uses AI to scan reconnaissance imagery for targets that would take humans “dozens of hours” to find. They’re testing fully autonomous drones — machines that could soon hunt in swarms, coordinating attacks without a single soldier pulling the trigger. And they’re not alone. The U.S., China, and Russia are all pouring billions into AI-driven warfare, each racing to outpace the others. But Ukraine’s data is different. It’s not simulated. It’s not theoretical. It’s real death, digitized and weaponized.

The problem? We’ve seen this movie before. Every major leap in military technology — from machine guns to atomic bombs — has been sold as a way to end war faster. Instead, it’s made war more efficient, more distant, and more devastating. When the first autonomous drone swarm is unleashed, will it distinguish between a soldier and a civilian? Will it care? Or will it simply follow the patterns it’s been trained on — patterns built on Ukraine’s kill zones, where the line between combatant and bystander has already blurred?

Keep reading