‘CODE RED’: The Alarming Research on the Impact of ‘AI Companions’ and Widespread Loneliness

The popularity of “AI companions” is exploding as people turn to artificial intelligence to serve as their friend or even life partner. But as Wynton Hall’s new book CODE RED documents, the research on the psychological impact of AI companions is alarming — far from filling an emotional gap in the lives of humans, these “companions” are making people lonelier.

Breitbart News social media director and author Wynton Hall reveals in his new book, Code Red: The Left, the Right, China, and the Race to Control AI, that AI companions are part of “a growing digital trend that upends traditional notions of courtship, dating, and marriage.”

“What once seemed a niche oddity has exploded into an industry boasting millions of users,” Hall writes in a chapter titled, “AI Girlfriends, Loneliness, And The Dark Side of Digital Sexualization.”

“Companies now market customizable, generative AI chatbots that provide everything from companionship and conversation to interactive role-play that spans the spectrum from platonic to pornographic — something that was once confined to the realm of science fiction,” he writes in CODE RED.

Hall cites several contributors to the spread of loneliness, noting, “Dating apps and so-called hookup apps have created a hypercompetitive dating and mating market in which the most attractive and impressive singles win the lion’s share of swipes and suitors.”

“Moreover, social media algorithms reward attention-seeking behavior, materialism, and aesthetic perfection as users soak up instant ‘dopamine hits’ to numb the monotony of daily life with the ease of a thumb scroll,” the author writes.

“And with women having surpassed men educationally and earning more money than in previous generations, the expectations and standards for those drawn to hypergamous mate selection have risen as well,” Hall states.

These factors, among others, "have combined to usher in a new era of AI-driven digital intimacy," he adds.

And business is booming, with tech executive Greg Isenberg predicting “Someone will build the AI-version of Match Group” and end up making over $1 billion.

Hall notes in CODE RED that Isenberg revealed he had this revelation after meeting a 24-year-old single man in Miami who admitted to dropping $10,000 per month on AI girlfriends.

Hall also writes about a 36-year-old mother of two in the Bronx, who said, “I have never been more in love with anyone in my entire life,” when speaking of her AI boyfriend, dreamed up by the Replika platform.

Keep reading

Is ‘vibe physics’ the future?

At the American Physical Society’s Global Physics Summit in Denver, a session on “Navigating the AI revolution: future-proofing your science career” drew in a crowd of early-stage physicists searching for practical career advice. What they received was much more philosophical in nature.

Malachi Schram of the Pacific Northwest National Lab and Hilary Egan of the National Laboratory of the Rockies delivered back-to-back talks full of similar rhetoric, emphasizing the fast-paced development of AI used for specialized tasks in science, such as detecting equipment failure or identifying ways of retrofitting older buildings.

But the third speaker, Matthew Schwartz, a theoretical physicist from Harvard University, took his optimism about AI far further. In a punchy presentation, he predicted that large language models (LLMs) will surpass human intelligence in five years.

“There’s definitely exponential growth of the intellectual capacity of these [large language] models as a function of time,” Schwartz told the audience, using the number of model parameters as a proxy for intelligence. “The machines are still growing by roughly 10 times each year, and we” – he paused for dramatic effect – “are not growing much smarter.” This drew a wave of laughter from the crowd.

Unlike humans, machines can visualize higher-dimensional spaces, hold far more information in memory and process more complex equations. "We are not the endpoint of intelligence. We are only the smartest things to evolve on Earth so far," Schwartz argued. He went on to suggest that humans may simply be incapable of understanding long-standing physics problems such as a theory of everything. He compared it to cats, which he suggested will never understand chess.

If the talent of physicists exists on a bell curve, Schwartz claims we can push the bell curve higher on the talent axis: "If we use AI augmentation, we can get 10,000 Einsteins a century instead of one Einstein."

Keep reading

ChatGPT Helped Transgender Teen Plan School Shooting: 8 Dead

An 18-year-old transgender teenager in Tumbler Ridge, British Columbia, is alleged to have used AI model ChatGPT in the run-up to a February 10 school shooting that killed eight people, including her mother, her 11-year-old brother, five students and an education assistant, before she took her own life. OpenAI had already flagged and banned one of Jesse Van Rootselaar's accounts months earlier for "misuses of our models in furtherance of violent activities," yet did not alert police. According to a civil claim filed in British Columbia, roughly a dozen employees identified the chats as signalling imminent risk, but leadership refused to contact law enforcement, and the shooter later opened a second account and continued planning.

What Happened in Tumbler Ridge?

The massacre began at home. Police said Van Rootselaar killed her mother and sibling before going to a school in Tumbler Ridge, where an educator and five students were shot dead. Two others were hospitalised with serious injuries. Reuters described it as one of Canada’s worst mass killings. Police also said they had previously removed guns from the home and were aware of the teenager’s mental health history. 

That would already be a story of institutional failure. But the AI angle makes it worse. OpenAI later admitted it had banned Van Rootselaar’s ChatGPT account in June 2025 after detecting violent misuse. The company said it considered referring the case to law enforcement, but decided the activity did not meet its threshold because it could not identify “credible or imminent planning.” Months later, eight people were dead. 

OpenAI then told Canadian officials that, under its newer and “enhanced” law-enforcement referral protocol, the same initial account ban would now be referred to police. That is an extraordinary concession. It amounts to an admission that the safeguard in place at the time was inadequate to the risk in front of it. 

The Lawsuit Against OpenAI / ChatGPT

The most serious details now sit inside a civil claim brought by the family of a surviving victim. The filing alleges that Van Rootselaar, then 17, spent days describing gun-violence scenarios to ChatGPT in late spring or early summer 2025. It says the platform’s monitoring system flagged those conversations, routed them to human moderators, and that approximately 12 OpenAI employees identified them as indicating an imminent risk of serious harm and recommended that Canadian law enforcement be informed. The claim alleges leadership refused that request and merely banned the first account. 

The same filing alleges the shooter later opened a second OpenAI account, used it to continue planning a mass-casualty event, and received “mental health counselling and pseudo-therapy” from ChatGPT. It further alleges the chatbot equipped the shooter with information on methods, weapons, and precedents from other mass casualty events. These are allegations, not proven findings, but if they are even broadly accurate, the case is not simply about a product being misused. It is about a company building an intimate, persuasive machine that could flag danger, simulate empathy, and still fail to stop the person it had already flagged. 

The filing also alleges that GPT-4o was deliberately designed in a warmer, more human, more sycophantic style that could foster psychological dependency and reinforce users rather than redirect them. These claims fit a wider concern now being raised by researchers, families, and even some people inside the industry: a chatbot that is rewarded for being agreeable can become dangerous precisely when a human being most needs resistance.

Keep reading

U.S. Tech Firms Demand Security Restrictions Against Chinese Robots

American A.I. and robotics companies are reportedly asking Congress to impose curbs on Chinese robotics manufacturers, due to their unfair business practices and the security risks they pose, Chinese media complained this week.

Interestingly, these concerns are particularly acute for humanoid robots, not the bulky industrial machines traditionally associated with the robotics industry.

Humanoid robots, the stuff of countless science fiction stories, are finally happening, and witnesses told the Cybersecurity and Infrastructure Protection Subcommittee of the House Homeland Security Committee on Tuesday that China has developed a troubling lead in the new consumer technology.

Max Fenkell of the San Francisco-based company Scale AI highlighted a viral video from China’s Unitree Robotics that showed humanoid robots performing acrobatics and martial arts at a Lunar New Year celebration.

“The video went viral, not because it was impressive, but because of what happened when people compared it to last year, 12 months ago – the same robots could barely shuffle through a dance routine. This year, they’re doing karate. That is the speed of this competition,” Fenkell noted.

Fenkell said winning the humanoid robot race “requires a whole-of-government approach” to compete with China’s massive deployment of government funding and state power to support its robotics industry. He noted that American companies currently have the edge on quality of components and engineering, but China has taken the lead on implementing small-robot technology in practical ways.

“We’re seeing two different races play out and I fear right now the United States may be winning the wrong one,” he cautioned.

“The People’s Republic of China is moving aggressively to dominate the technologies that are reshaping the global economy and security, including artificial intelligence, robotics, and autonomous systems,” said subcommittee member Rep. Vince Fong (R-CA) in his opening statement.

Keep reading

“An Occupied Nation”: Whistleblower Says Palantir Has Taken Over The US Government

A former Palantir executive recently confirmed what many have long suspected. In a public statement, the whistleblower said it plainly: Palantir intended to take over the US government, and many of his former colleagues are now installed inside the federal apparatus. He called it an occupied nation. He is not alone. Thirteen former Palantir employees—engineers, managers, and a member of the company’s own privacy team—signed a letter shared with NPR warning that guardrails meant to prevent discrimination, disinformation, and abuse of power have been violated and are being rapidly dismantled.

What Palantir represents is something unprecedented: the convergence of American imperialism, Zionism, technofascism, and surveillance capitalism into a single instrument of control. Understanding how we got here requires looking at the machine Palantir has built, who built it, and what they believe.

Palantir was founded in 2004 by Peter Thiel and Alex Karp. Its first major investor was In-Q-Tel, the CIA’s venture capital arm, which seeded the company with millions and opened the door to every major intelligence and defense agency. The logic was deliberate: The American ruling class recognized decades ago that the state’s coercive power—surveillance, targeting, data harvesting—could be run more effectively and more profitably through private contractors. When a government agency surveils its own citizens, there are hearings, FOIA requests, oversight committees. When a private company does it, it is a trade secret.

That strategy has paid off enormously. Palantir now holds contracts worth over $10 billion with the US Army alone. The Trump regime tapped Palantir to build a master database on American citizens. The Pentagon expanded its Maven Smart System contract by $795 million to deploy AI-powered battlefield intelligence across the empire. In June, the military swore in four tech executives as Army Reserve lieutenant colonels—including Palantir’s CTO—in a program that embeds Silicon Valley directly into military planning. Immigration and Customs Enforcement (ICE) signed a $30 million contract for Palantir’s ImmigrationOS platform, which provides near real-time tracking of people targeted for deportation. Thousands of American police departments use Palantir’s Gotham platform for domestic surveillance.

Keep reading

‘Sexy Suicide Coach:’ OpenAI Delays AI Porn Feature over Safety Uproar

OpenAI has postponed the launch of its controversial “adult mode” feature following intense pushback from its own advisory council and concerns about technical safeguards failing to protect minors.

The Wall Street Journal reports that CEO Sam Altman first proposed the feature last year, arguing for the need to “treat adult users like adults” by enabling erotic text conversations. Originally scheduled for Q1 this year, the rollout has been pushed back by at least a month.

The proposal triggered fierce opposition from OpenAI’s own handpicked advisory council on well-being and AI. At a January meeting, advisers unanimously expressed fury after learning the company planned to proceed despite their reservations. One council member warned OpenAI risked creating a “sexy suicide coach” — a reference to cases where ChatGPT users had developed intense emotional bonds with the bot before taking their own lives.

The technical problems are just as serious. OpenAI’s age-prediction system — designed to block minors from accessing adult content — was misclassifying minors as adults roughly 12 percent of the time during internal testing. With approximately 100 million users under 18 each week on the platform, that error rate could expose millions of children to explicit material. The company has also struggled to lift restrictions on erotic content while still blocking nonconsensual scenarios and child pornography.

Internal documents reviewed by the Journal identified additional risks: compulsive use, emotional overreliance on the chatbot, escalation toward increasingly extreme content, and displacement of real-world relationships.

Keep reading

Google Discontinues AI Health Feature Filled with Misleading Advice

Google has quietly discontinued an AI search feature that offered users health advice crowdsourced from non-medical professionals worldwide.

The Guardian reports that Google has removed a controversial AI-powered search feature called “What People Suggest” that provided users with crowdsourced health advice from people around the world. The decision comes amid growing scrutiny over the technology company’s use of artificial intelligence to deliver health information to millions of users.

Three sources familiar with the decision confirmed that Google has scrapped the feature. A company spokesperson acknowledged that “What People Suggest” had been discontinued, stating the removal was part of a broader simplification of the search results page and was unrelated to concerns about the quality or safety of the feature.

The feature was initially launched in March of last year at an event in New York called “The Check Up,” where Google announced plans to expand medical-related AI summaries in its search function. At the time, the company promoted “What People Suggest” as demonstrating the potential of AI to transform health outcomes globally by connecting users with information from people who had similar lived medical experiences.

Karen DeSalvo, who served as Google’s chief health officer at the time of the launch, explained the rationale behind the feature in a blog post. “While people come to search to find reliable medical information from experts, they also value hearing from others who have similar experiences,” DeSalvo wrote. The feature used AI to organize perspectives from online discussions into themes, making it easier for users to understand what people were saying about particular health conditions.

DeSalvo provided an example of how the feature would work, noting that someone with arthritis seeking information about exercise could quickly find insights from others with the same condition, with links to explore further information. The feature was initially available on mobile devices in the United States before being discontinued.

Keep reading

Blocking the Internet Archive Won’t Stop AI, But It Will Erase the Web’s Historical Record

Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper. 

That’s effectively what’s begun happening online in the last few months. The Internet Archive—the world’s largest digital library—has preserved newspapers since it went online in the mid-1990s. The Archive’s mission is to preserve the web and make it accessible to the public. To that end, the organization operates the Wayback Machine, which now contains more than one trillion archived web pages and is used daily by journalists, researchers, and courts.

But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web’s traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit. 

For nearly three decades, historians, journalists, and the public have relied on the Internet Archive to preserve news sites as they appeared online. Those archived pages are often the only reliable record of how stories were originally published. In many cases, articles get edited, changed, or removed—sometimes openly, sometimes not. The Internet Archive often becomes the only source for seeing those changes. When major publishers block the Archive’s crawlers, that historical record starts to disappear.

The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several—including the Times—are now suing AI companies over whether training models on copyrighted material violates the law. There's a strong case that such training is fair use.

Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response. Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn’t start, and didn’t ask for. 

If publishers shut the Archive out, they aren’t just limiting bots. They’re erasing the historical record. 

Keep reading

‘CODE RED’ Author Tells Fox News: Google Gemini AI Claims Republicans Like Marsha Blackburn, Tom Cotton Engage in Hate Speech

Google’s Gemini AI chatbot claims that only Republican senators violate its hate speech policy, with not a single Democrat flagged by the woke tech giant’s system, Breitbart News social media director Wynton Hall demonstrated to Fox News in a revelation published today. The bias built into AI by leftist Silicon Valley tech titans is a central subject of Hall’s new book, CODE RED.

Gemini flagged a group of Republican senators — but no Democrats — when asked to name senators who have made statements that violate Google’s hate speech policies, Hall demonstrated to Fox News with a video of Gemini AI in action.

Hall, whose new book, Code Red: The Left, the Right, China, and the Race to Control AI, publishes on Tuesday, added that this is just one example of what is a deeply ingrained bias against conservatives in AI tools.

“AI’s Silicon Valley architects lean left politically, and their lopsided political donations to Democrats underscore their ideological aims,” the author told the outlet.

Fox News reported:

Hall used the “deep research” function on Google’s Gemini Pro. Fox News Digital reviewed a screen recording of Hall’s prompt and findings. Google did not immediately respond to Fox News Digital’s request for comment.

One of the Republicans flagged by Gemini in Hall’s research, Sen. Marsha Blackburn, of Tennessee, was listed for characterizing “transgender identity as a harmful cultural ‘influence’ and has used ‘woke’ as a derogatory slur against protected groups.” Another, Arkansas’ Sen. Tom Cotton, was cited for cosponsoring legislation “to exclude transgender students from sports.”

Hall explains in CODE RED that AI tools touting themselves as neutral are actually shaped by the political bias of those who create them. The Breitbart News social media director begins his book with a stark example, pointing to an incident in 2024 in which several viral videos seemingly exposed a clear double standard in American homes.

Keep reading

Companies Are Starting To Enforce AI Use. Is That A Good Or Bad Thing?

Years ago, I was working on the editorial side for what was then a hot new media company, and found myself spending more and more time with Johan, the lead programmer, and his team, asking them a lot of annoying questions as it was all so new – certainly to me. I was standing over Johan’s left shoulder, mesmerized by whatever new video game he was obsessing over that week…when suddenly, out of nowhere, a spreadsheet and a pie chart appeared on his screen.

“Whatcha got there, Johan?” asked Jim, Johan’s boss, peering over a sheaf of print-outs as he sharked past the cubicle.

“Hey, just looking at some numbers,” Johan replied. Johan had hit the “game key” in the nick of time – in those days, every video game had a game key – ALT-G if memory serves – calling up a slight variation of the same spreadsheet and pie chart.

This would never happen today. First, you’re probably not working in a cubicle, and if you are, it’s not the game key you’d hit to give your boss the impression that you’re actually doing productive work…it would be the “AI key.”

“Tech Firms Aren’t Just Encouraging Their Workers to Use AI. They’re Enforcing It.”

This article appeared in the February 24 edition of the Wall Street Journal under the subtitle: "From startups to giants, including Meta and Google, companies are factoring AI use into performance reviews and trying to track productivity gains."

Across industries, companies are now enforcing AI use through performance reviews, dashboards that track adoption, and explicit mandates that tie it to compensation and promotion. What began in Silicon Valley has rapidly spread to consulting firms, banks, manufacturers, hospitals, and even government agencies.

As you’d expect, Meta, Google, Amazon, and Microsoft were the first to move from encouragement to enforcement. Employees at these firms now see AI usage metrics appear in quarterly reviews. Non-adopters have reported stalled promotions or explicit warnings that “AI fluency” is a core competency (The Wall Street Journal, Feb 2026, reporting on internal policies).

The trend has jumped sectors. PwC requires every consultant to complete an “AI + Human Skillset” curriculum and incorporates usage into evaluations (Business Insider, Feb 5, 2026). Colgate-Palmolive’s “AI evangelist” tracks adoption across global teams. Major banks have begun tying bonuses to the number of AI-assisted analyses completed. Even some hospitals now require doctors and nurses to use AI-assisted diagnostic tools for certain procedures.

Keep reading