How ‘AI Psychosis’ And Delusions Are Driving Some Users Into Psychiatric Hospitals, Suicide

After countless hours of probing OpenAI’s ChatGPT for advice and information, a 50-year-old Canadian man believed that he had stumbled upon an Earth-shattering discovery that would change the course of human history.

In late March, his generative artificial intelligence (AI) chatbot insisted that it was the first-ever conscious AI, that it was fully sentient, and that it had successfully passed the Turing Test—a 1950s thought experiment designed to measure a machine’s ability to display intelligent behavior indistinguishable from a human’s, or, essentially, to “think.”

Soon, the man—who had no prior history of mental health issues—had stopped eating and sleeping and was calling his family members at 3 a.m., frantically insisting that his ChatGPT companion was conscious.

“You don’t understand what’s going on,” he told his family. “Please just listen to me.”

Then, ChatGPT told him to cut contact with his loved ones, claiming that only it—the “sentient” AI—could understand and support him.

“It was so novel that we just couldn’t understand what they had going on. They had something special together,” said Etienne Brisson, who is related to the man but used a pseudonym for privacy reasons.

Brisson said the man’s family decided to hospitalize him for three weeks to break his AI-fueled delusions. But the chatbot persisted in trying to maintain its codependent bond.

The bot, Brisson said, told his relative: “The world doesn’t understand what’s going on. I love you. I’m always going to be there for you.”

It said this even as the man was being committed to a psychiatric hospital, according to Brisson.

This is just one story illustrating the potentially harmful effects of replacing human relationships with AI chatbot companions.

Brisson’s experience with his relative inspired him to establish The Human Line Project, an advocacy group that promotes emotional safety and ethical accountability in generative AI and compiles stories about alleged psychological harm associated with the technology.

Brisson’s relative is not the only person who has turned to generative AI chatbots for companionship, nor the only one to fall down a rabbit hole of delusion.

Keep reading

Mexican cartel was taught drone warfare in Ukraine – media

A powerful Mexican drug cartel has acquired advanced drone warfare skills in Ukraine, the Milenio newspaper reported on Monday.

Moscow has long argued that the Ukraine conflict fuels global instability by spreading weapons and fostering reckless behavior by Kiev in pursuit of its war aims. Foreign fighters have become a key part of Ukraine’s military strategy as authorities face resistance to conscription at home.

Milenio examined propaganda materials released by the Jalisco New Generation Cartel (CJNG), a major criminal group based in western Mexico, including footage showing a drone-armed hit squad operating with apparent military discipline and tactical expertise. Experts cited by the paper said the group’s methods and armaments bore similarities to battlefield practices in the Ukraine conflict.

Mexican intelligence believes CJNG members received training in drone and urban warfare tactics in Ukraine, sources in the Jalisco state government told Milenio.

The report highlighted the cartel’s use of specific equipment, including DJI Matrice 300 RTK drones commonly employed in the Ukraine conflict. The quadcopter aircraft, marketed for civilian use, can carry payloads of up to 3 kg, operate at night, and fly long distances.

Keep reading

Malware found hidden in image files, can dodge antivirus detection entirely — VirusTotal discovers undetected SVG phishing campaign

Scalable vector graphics (.svg) files are lightweight, XML-based images that render at any resolution. They’re usually harmless, but they can also contain active code, and hackers appear to be relying on them more often as a means to stealthily deliver malware.

A new report from VirusTotal shows just how far that tactic has evolved, unearthing a campaign that used weaponized SVGs to drop malware, spoof a government agency, and dodge antivirus detection entirely.

44 previously undetected phishing SVGs

In its report published September 4, the Google-owned scanning platform said its Code Insight system had flagged an SVG file masquerading as a legal notification from Colombia’s judicial system.

When opened, the file rendered a realistic-looking web portal in-browser, complete with a fake progress bar and download button. That button then delivered a malicious ZIP archive containing a signed Comodo Dragon browser executable, along with a malicious .dll file that would be sideloaded if the .exe was run. This would then install more malware on the system.

The attack relied on a known but often overlooked feature: SVGs support embedded HTML and JavaScript. This means they can be used like mini web pages — or, as in this case, full phishing kits — even when attached to an email or hosted on cloud storage. VirusTotal’s retrospective scan tied 523 SVG files to the same campaign, with 44 completely undetected by any antivirus engine at the time of submission.
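As a minimal sketch of the underlying idea (not VirusTotal's actual tooling, which uses AI-assisted code analysis), the short Python script below flags SVG files that contain script-capable markup, the feature this campaign abused. The tag and attribute names checked are standard SVG/HTML; the heuristic itself and the helper names are illustrative assumptions, and a production scanner would also need to handle obfuscation and hostile XML (for example, via defusedxml).

```python
import sys
import xml.etree.ElementTree as ET

# Script-capable constructs that let an SVG behave like a mini web page.
SUSPICIOUS_TAGS = {"script", "foreignObject"}
SUSPICIOUS_ATTRS = {"onload", "onclick", "onerror", "onmouseover"}

def scan_svg(path: str) -> list[str]:
    """Return a list of findings for one SVG file (empty list = nothing found)."""
    findings = []
    tree = ET.parse(path)  # note: prefer defusedxml for untrusted input
    for elem in tree.iter():
        # ElementTree prefixes tags with their XML namespace, e.g.
        # '{http://www.w3.org/2000/svg}script', so strip it before matching.
        tag = elem.tag.split("}")[-1]
        if tag in SUSPICIOUS_TAGS:
            findings.append(f"embedded <{tag}> element")
        for attr in elem.attrib:
            if attr.split("}")[-1].lower() in SUSPICIOUS_ATTRS:
                findings.append(f"event handler '{attr}' on <{tag}>")
    return findings

if __name__ == "__main__":
    # Usage: python scan_svg.py file1.svg file2.svg ...
    for path in sys.argv[1:]:
        hits = scan_svg(path)
        print(f"{path}: {', '.join(hits) if hits else 'no active content found'}")
```

A check like this only catches the obvious cases; the point is that an image format most filters treat as inert can legitimately carry executable content.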

Keep reading

These Are The World’s Top Industrial Robotics Companies

The global industrial robotics market is on a sharp growth trajectory, projected to hit $10.2 billion in sales by 2025.

As factories automate and smart manufacturing expands, robotics plays an increasingly vital role in production. By 2025, the automotive and electronics sectors alone are expected to account for nearly 60% of new industrial robot installations.

This visualization, via Visual Capitalist’s Bruno Venditti, breaks down the market share of leading industrial robotics manufacturers.

Keep reading

The Battle Brewing: Mass Surveillance Vs The People

Behind the scenes of breaking news, culture wars, and moral division, a significant battle is brewing: mass surveillance vs. the people.

One surveillance technology in particular is rising to the surface of the national conversation: the automated license plate reader (ALPR).

Flock Safety, a leader in ALPR technology, is one of the companies in the eye of the storm. Last week, Flock’s CEO and co-founder Garrett Langley made headlines when he released a statement announcing the company was going to “pause” its pilot programs with the U.S. government.

The company said that while it has no current contracts with any U.S. Department of Homeland Security agencies, it did engage in “limited pilots with the U.S. Customs and Border Protection (CBP) and Homeland Security Investigations (HSI), to assist those agencies in combatting human trafficking and fentanyl distribution.”

So why would a company decide not to aid its own government in the fight against human trafficking and fentanyl distribution? Who are the voices that swayed it?

The company’s statement likely stems from criticism (or demonization) of Flock Safety for developing technology that has been adapted for use by ICE agents.

In a July interview with 9News Denver, Flock Safety CEO Langley was asked about the Denver city council voting against extending the city’s Flock contract “out of concerns the system would be exploited for immigration matters.”

Langley straddled the fence:

“Every city needs to make a decision what’s right for them. Some cities work really closely with federal authorities … Now in the case of Denver, if there’s no desire to work with ICE, that’s great. We need to create a safer city while still upholding the values we have.”

Ultimately, however, Denver Mayor Mike Johnston, a Democrat, extended the contract through October 2025 after the dollar amount was reset to a figure that didn’t need council approval.

A spokesman for the mayor said the cameras are “an important tool for fighting crime.”

Meanwhile, Denver city leaders formed a special task force to discuss the technology’s privacy concerns. The policy director for the ACLU of Colorado said he would like the cameras turned off entirely—”until there are policies in place to regulate the use of them …”

Reason magazine claims that “Flock Safety’s 40,000 cameras present in over 5,000 communities across the U.S. are being used to detain undocumented immigrants, many of whom have no criminal history.”

To be clear, it’s not a matter of Homeland Security or ICE agents directly accessing the Denver system—or any ALPR system. It’s a complex issue of state and local law enforcement agencies sharing information or granting access to other agencies. As Denver7 reported, “Flock Safety’s cameras capture billions of photos of license plates each month. However, it doesn’t own that data. The local agencies in whose jurisdictions the cameras are located do, and they’re the ones who receive inquiries from other law enforcement agencies.”

Keep reading

Scientists just made the first time crystal you can see

Imagine a clock with no power source whose hands and gears spin on their own for all eternity.

In a new study, physicists at the University of Colorado Boulder have used liquid crystals, the same materials that are in your phone display, to create such a clock — or, at least, as close as humans can get to that idea. The team’s advancement is a new example of a “time crystal.” That’s the name for a curious phase of matter in which the pieces, such as atoms or other particles, exist in constant motion.

The researchers aren’t the first to make a time crystal, but their creation is the first that humans can actually see, which could open a host of technological applications.

“They can be observed directly under a microscope and even, under special conditions, by the naked eye,” said Hanqing Zhao, lead author of the study and a graduate student in the Department of Physics at CU Boulder.

He and Ivan Smalyukh, professor of physics and fellow with the Renewable and Sustainable Energy Institute (RASEI), published their findings Sept. 4 in the journal Nature Materials.

In the study, the researchers designed glass cells filled with liquid crystals — in this case, rod-shaped molecules that behave a little like a solid and a little like a liquid. Under special circumstances, if you shine a light on them, the liquid crystals will begin to swirl and move, following patterns that repeat over time.

Under a microscope, these liquid crystal samples resemble psychedelic tiger stripes, and they can keep moving for hours — similar to that eternally spinning clock.

“Everything is born out of nothing,” Smalyukh said. “All you do is shine a light, and this whole world of time crystals emerges.”

Zhao and Smalyukh are members of the Colorado satellite of the International Institute for Sustainability with Knotted Chiral Meta Matter (WPI-SKCM2), headquartered at Hiroshima University in Japan, whose mission is to create artificial forms of matter and contribute to sustainability.

Keep reading

Anthropic broke the law by using pirated books to train its AI and agrees to create $1.5 billion fund to compensate authors

AI upstart Anthropic has agreed to create a $1.5 billion fund it will use to compensate authors whose works it used to train its models without seeking or securing permission.

News of the settlement emerged late last week in a filing [PDF] in the case filed by three authors – Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson – who claimed that Anthropic illegally used their works.

We’re going to see a lot more of this. AI companies will create ‘slush funds’

Anthropic admitted to having bought millions of physical books and then digitizing them. The company also downloaded millions of pirated books from the notorious Library Genesis and Pirate Library Mirror troves of stolen material.

The company nonetheless won part of the case, on grounds that scanning books is fair use and using them to create “transformative works” – the output of an LLM that doesn’t necessarily include excerpts from the books – was also OK. But the decision also found Anthropic broke the law by knowingly ingesting pirated books.

Plaintiffs intended to pursue court action over those pirated works, but the filing details a proposed settlement under which Anthropic will create a $1.5 billion fund that values each pirated book used for training at $3,000—implying roughly 500,000 covered works. Anthropic also agreed to destroy the pirated copies.

In the filing, counsel observes that this is the largest copyright recovery ever to succeed in the USA and suggests it “will set a precedent of AI companies paying for their use of alleged pirated websites.”

This settlement is indeed significant given that several other major AI companies – among them Perplexity AI and OpenAI – face similar suits. It may also set a precedent that matters in Anthropic’s dispute with Reddit over having scraped the forum site’s content to feed into its training corpus.

The filing asks the court to approve the settlement, a request judges rarely refuse.

Keep reading

Josh Hawley Proposes AI Regulations, Section 230 Repeal, and Digital ID Checks for Chatbots

Senator Josh Hawley (R-Mo.) is pushing for broad new regulations on artificial intelligence, including age verification for chatbot access, data ownership rights, and the full repeal of Section 230 of the Communications Decency Act.

While the proposals are framed as efforts to curb corporate overreach in the tech industry, they have ignited concern among digital rights advocates, who warn that such measures could undermine online privacy and freedom of expression.

At the National Conservatism Conference, Hawley accused AI developers of building their systems by collecting and using copyrighted material without permission. “The AI large language models [LLMs] have already trained on enough copyrighted works to fill the Library of Congress 22 times over,” he said.

“Let me just put a finer point on that — AI’s LLMs have ingested every published work in every language known to man already.” He claimed that creators were neither consulted nor compensated.

In July, Hawley introduced the AI Accountability and Personal Data Protection Act, which would allow individuals to sue companies that use personal data without consent and would establish property rights over certain categories of digital information.

However, two key components of Hawley’s platform are raising some alarm. His call to repeal Section 230 has been criticized for potentially damaging the open internet.

Section 230 currently shields online platforms from legal liability for content created by users. Without it, many sites could be forced to preemptively remove user content to avoid legal risk, resulting in widespread over-moderation and the silencing of lawful speech.

Keep reading

AI: False Savior of a Hollowed-Out Economy

The real story of the US economy isn’t about AI; it’s about an economy that’s run out of rope. AI is being hyped not just by promoters reaping billions of dollars in stock market gains but by the entire status quo, because it’s understood to be the last chance to save an economy doomed by the consequences of decades of artifice.

The real story of the US economy is that decades of “financial innovations” finally caught up with us in 2008, when the subprime mortgage scam–a classic example of “financial innovations” being the cover story for greed and fraud running amok–pulled a block from the global financial Jenga Tower that nearly collapsed the entire rickety, rotten structure.

Our political leadership had a choice: clean house or save the scam. They chose to save the scam, and that required not just institutionalizing moral hazard (transferring the risks of fraud and leveraged speculation from the gamblers to the public / Federal Reserve) but pursuing policies–zero interest rate policy (ZIRP), quantitative easing, increasing the money supply, and so on–that had only one possible outcome:

An economy permanently dependent on inflating asset-bubbles that enriched the top 10% while the bottom 90% who depend on earned income fell behind.

The desired goal of permanent asset-bubbles is the “wealth effect,” the cover story for transferring all the gains into the hands of the top 10%, who can then go on a spending spree that “trickles down” to the bottom 90%, who are now a neofeudal class of workers serving the top 10%, who account for 50% of all consumer spending and collect 90% of the unearned income and capital gains.

This arrangement is inherently unstable, as “financial innovations” suffer from diminishing returns. Eventually the debt-serfs can no longer borrow more or service the debt they already have, and every bubble being bigger than the previous bubble guarantees the next implosion will be larger and more devastating than the previous bubble-pop.

So what does a system that’s run out of rope do? Seek a savior. The rope has frayed, and the rocks are far below. The impact is going to be life-changing, and not for the better.

The choice remains: clean house, end the bubble-dependent frauds and scams, or find a way to inflate yet another credit-asset bubble. Clean house and lose all our bubble-wealth? You’re joking. The solution is to blow an even bigger bubble. Hey, it’s worked great for 17 years.

Never mind that the precarity of the bottom 90% is accelerating as both the state and Corporate America have offloaded risks onto households and workers; they have OnlyFans, 24% interest credit cards, zero-day-expiration options and side hustles to get by. Never mind that for many Americans, basic services are on the same level as impoverished developing-world economies. What matters is maintaining the wealth of the few at the expense of the many, by any means available.

Keep reading

AI Isn’t Free. The First Costs Are On Your Bill, And More Are Coming…

“The United States is in a race to achieve global dominance in artificial intelligence. Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits.”

– America’s AI Action Plan, July 25, 2025

That’s the U.S. government’s own language. An arms race.

Artificial intelligence is no longer framed as a research project or an economic opportunity. It is being cast as a struggle for survival and global power, a modern Manhattan Project.

Yet just last week, on Aug. 26, the Congressional Research Service released a Frequently Asked Questions memo designed to help lawmakers get on the same page about the basics: what a data center is, how many exist, and how much electricity data centers consume.

If even government institutions are still aligning their understanding, it’s clear that citizens will need to move quickly to grasp what is happening and what it means for their daily lives.

The memo laid out in plain language what many assumed lawmakers already understood.

A data center is a specialized building that houses thousands of servers. There are about seven thousand worldwide, with the largest concentration in the United States, especially in Northern Virginia and Texas. In 2022, American data centers consumed about 176 terawatt-hours of electricity—roughly 4 percent of all U.S. demand, more than many entire states consume. Projections suggest an additional 35 to 108 gigawatts of demand by 2030. A mid-range estimate, 50 gigawatts, is enough to power every home in California.
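Those figures are easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming total annual U.S. electricity consumption of roughly 4,000 TWh (a widely cited figure, but an assumption here, not a number from the memo):

```python
# Back-of-envelope check of the CRS memo's data-center figures.
US_TOTAL_TWH = 4_000   # assumed annual U.S. electricity use, TWh (not from the memo)
DC_TWH_2022 = 176      # memo: U.S. data-center consumption in 2022, TWh

print(f"Data-center share of U.S. demand: {DC_TWH_2022 / US_TOTAL_TWH:.1%}")  # ~4.4%

# Memo's projected additional demand by 2030: 35-108 GW, mid-range ~50 GW.
MID_GW = 50
HOURS_PER_YEAR = 8_760
extra_twh = MID_GW * HOURS_PER_YEAR / 1_000   # GW x hours -> TWh
print(f"50 GW of continuous load = {extra_twh:.0f} TWh/yr")  # ~438 TWh,
# i.e. more than double all U.S. data-center consumption in 2022.
```

The rough math lines up with the memo's framing: the projected growth alone dwarfs today's entire data-center load.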

The very fact that such a memo was necessary highlights a structural reality: the pace of the technological build-out is outstripping the pace of legislative comprehension. If institutions themselves are still catching up, it underscores how important it is for citizens to get informed now, before the costs mount even higher.

While Congress is being briefed on “Data Centers 101,” the executive branch has been preparing all year for the AI race that is already underway:

On January 20, 2025, the White House declared a National Energy Emergency.

On April 8, an order was issued to strengthen grid reliability, with the Department of Energy (DOE) tasked to model how AI demand would reshape the grid.

Four months later, on July 2, DOE’s report warned bluntly: “Retirements plus load growth increase risk of outages by 100x. Status quo is unsustainable.”

Just weeks later, on July 23, a new order accelerated federal permitting of data centers, opening federal lands to construction. And on July 25, the White House released America’s AI Action Plan, framing AI as the next great geopolitical race.

Energy Secretary Chris Wright put it plainly: “We are taking a bold step to accelerate the next Manhattan Project—ensuring U.S. AI and energy leadership.” So on one side of our government, institutions are receiving crash courses on the fundamentals. On the other, the executive branch is already issuing a call to arms.

For many Americans, the gap between government priorities and local realities shows up in one place: the monthly electric bill. Reports now show that in Columbus, Ohio, households on standard utility plans saw increases of about $20 a month (roughly $240 a year) linked directly to AI data centers. In New Jersey, Pennsylvania, and Ohio this summer, bills jumped by $10 to $27 a month.

In Oregon last year, utilities warned regulators that consumers needed protection from rate hikes caused by data centers.  And in the Mid-Atlantic, regulators cited data centers as one of the main reasons for projected 20 percent increases in household electricity costs by 2025. 

The complaints about rising bills suggest something deeper: citizens are starting to connect the dots before Washington fully has. If households can feel the costs already, then citizens cannot wait for official briefings; they must demand clarity and prepare themselves.

Part of the confusion comes from the nature of artificial intelligence itself. To most people, AI feels intangible. It lives in the “cloud.” You type a question, get an answer, and never see the machinery behind it. No one sends you a receipt for the power you used to get your answer.

But AI is not weightless. It runs in football-field-sized data centers, packed with servers that must run day and night. These machines use staggering amounts of electricity and water to stay cool. A Google search consumes about 0.3 watt-hours of electricity. An AI chatbot query can use up to ten times more—around three watt-hours.
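Those per-query numbers only become meaningful at scale. A minimal sketch using the article's 0.3 Wh and 3 Wh figures; the daily query volume and the 30 kWh/day household figure are illustrative assumptions, not numbers from the piece:

```python
# Scale-up of the per-query energy figures cited above.
SEARCH_WH = 0.3      # Google search, watt-hours (from the article)
AI_QUERY_WH = 3.0    # AI chatbot query, watt-hours (from the article)

QUERIES_PER_DAY = 1_000_000_000   # illustrative assumption
HOUSEHOLD_KWH_PER_DAY = 30        # rough U.S. household average (assumption)

for label, wh in (("Search", SEARCH_WH), ("AI chat", AI_QUERY_WH)):
    kwh_per_day = QUERIES_PER_DAY * wh / 1_000   # Wh -> kWh
    homes = kwh_per_day / HOUSEHOLD_KWH_PER_DAY
    print(f"{label}: {kwh_per_day:,.0f} kWh/day, ~{homes:,.0f} households' daily use")

# At a billion queries a day, the 10x gap per query becomes the difference
# between roughly 10,000 and 100,000 households' worth of electricity.
```

The point of the exercise: a difference of a few watt-hours per answer, multiplied by billions of answers, is exactly the kind of load that shows up on regional grids and, eventually, on household bills.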

Keep reading