Blackburn’s TRUMP AMERICA AI Act Repeals Section 230, Expands AI Liability, and Mandates Age Verification

Senator Marsha Blackburn has introduced a 291-page legislative discussion draft that would reshape how information is allowed to exist online.

The TRUMP AMERICA AI Act, officially titled “The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry” Act, bundles together Section 230 repeal, expanded AI liability, age verification mandates, and a stack of additional bills that have been circulating separately for years.

All of it is wrapped in a national AI framework that is said to be tied to President Trump’s December Executive Order. The bill is framed as pro-innovation and pro-safety, designed to “protect children, creators, conservatives, and communities” while positioning the US to win the global AI race.

What the actual 291 pages describe is a system that centralizes regulatory authority, removes the legal protections platforms currently rely on, and hands new enforcement tools to federal agencies, state attorneys general, and private litigants simultaneously.

We obtained a copy of the bill for you here.

The legal foundation of the modern internet is Section 230 of the Communications Decency Act. It shields platforms from being sued over the content their users post. Without Section 230, platforms could become legally responsible for that content, which could mean anything controversial, contested, or legally ambiguous becomes a liability they’ll quietly remove rather than defend.

Blackburn’s bill repeals it entirely, after a two-year transition period.

Platforms and AI developers could face lawsuits for “defective design,” “failure to warn,” or deploying systems deemed “unreasonably dangerous.”

AI platforms would be incentivized to heavily monitor users.

Keep reading

US to embed Palantir AI across entire military: Report

The Pentagon has designated Palantir’s Maven artificial intelligence system as an official program of record, in a move that will lock in the weapons-targeting technology long term across the US military, Reuters reported on 21 March.

The move was announced in a letter from Deputy Secretary of War Steve Feinberg issued to senior Pentagon leaders and US military commanders on 9 March.

Feinberg wrote that embedding Palantir’s Maven Smart System would provide the military “with the latest tools necessary to detect, deter, and dominate our adversaries in all domains.”

Maven is the US military’s main AI system, analyzing data from satellites, drones, radars, sensors, and reports. It uses AI to interpret data and swiftly identify and strike targets like enemy vehicles, buildings, and weapons.

The White House claims US warplanes have hit more than 7,800 targets since the war on Iran began just three weeks ago.

“It is imperative that we invest now and with focus to deepen the integration of artificial intelligence (AI) across the Joint Force and establish AI-enabled decision-making as the cornerstone of our strategy,” Feinberg wrote.

During a presentation at a Palantir event earlier this month, Pentagon official Cameron Stanley illustrated how the Maven program identifies targets.

“When we started this, it literally took hours to do what you just saw,” he said.

Keep reading

‘CODE RED’: The Alarming Research on the Impact of ‘AI Companions’ and Widespread Loneliness

The popularity of “AI companions” is exploding as people turn to artificial intelligence to serve as their friend or even life partner. But as Wynton Hall’s new book CODE RED documents, the research on the psychological impact of AI companions is alarming — far from filling an emotional gap in the lives of humans, these “companions” are making people lonelier.

Breitbart News social media director and author Wynton Hall reveals in his new book, Code Red: The Left, the Right, China, and the Race to Control AI, that AI companions are part of “a growing digital trend that upends traditional notions of courtship, dating, and marriage.”

“What once seemed a niche oddity has exploded into an industry boasting millions of users,” Hall writes in a chapter titled, “AI Girlfriends, Loneliness, And The Dark Side of Digital Sexualization.”

“Companies now market customizable, generative AI chatbots that provide everything from companionship and conversation to interactive role-play that spans the spectrum from platonic to pornographic — something that was once confined to the realm of science fiction,” he writes in CODE RED.

Hall cites several contributors to the spread of loneliness, noting, “Dating apps and so-called hookup apps have created a hypercompetitive dating and mating market in which the most attractive and impressive singles win the lion’s share of swipes and suitors.”

“Moreover, social media algorithms reward attention-seeking behavior, materialism, and aesthetic perfection as users soak up instant ‘dopamine hits’ to numb the monotony of daily life with the ease of a thumb scroll,” the author writes.

“And with women having surpassed men educationally and earning more money than in previous generations, the expectations and standards for those drawn to hypergamous mate selection have risen as well,” Hall states.

These factors, among others, “have combined to usher in a new era of AI-driven digital intimacy,” he adds.

And business is booming, with tech executive Greg Isenberg predicting “Someone will build the AI-version of Match Group” and end up making over $1 billion.

Hall notes in CODE RED that Isenberg revealed he had this revelation after meeting a 24-year-old single man in Miami who admitted to dropping $10,000 per month on AI girlfriends.

Hall also writes about a 36-year-old mother of two in the Bronx, who said, “I have never been more in love with anyone in my entire life,” when speaking of her AI boyfriend, dreamed up by the Replika platform.

Keep reading

Is ‘vibe physics’ the future?

At the American Physical Society’s Global Physics Summit in Denver, a session on “Navigating the AI revolution: future-proofing your science career” drew a crowd of early-career physicists searching for practical career advice. What they received was much more philosophical in nature.

Malachi Schram of the Pacific Northwest National Lab and Hilary Egan of the National Laboratory of the Rockies delivered back-to-back talks full of similar rhetoric, emphasizing the fast-paced development of AI used for specialized tasks in science, such as detecting equipment failure or identifying ways of retrofitting older buildings.

But the third speaker, Matthew Schwartz, a theoretical physicist from Harvard University, took his optimism about AI far further. In a punchy presentation, he predicted that large language models (LLMs) will surpass human intelligence in five years.

“There’s definitely exponential growth of the intellectual capacity of these [large language] models as a function of time,” Schwartz told the audience, using the number of model parameters as a proxy for intelligence. “The machines are still growing by roughly 10 times each year, and we” – he paused for dramatic effect – “are not growing much smarter.” This drew a wave of laughter from the crowd.

Unlike humans, machines can visualize higher-dimensional spaces, hold far more information in memory, and process more complex equations. “We are not the endpoint of intelligence. We are only the smartest things to evolve on Earth so far,” Schwartz argued. He went on to suggest that humans may simply be incapable of understanding long-standing physics problems such as a theory of everything. He compared it to cats, which he suggested will never understand chess.

If the talent of physicists exists on a bell curve, Schwartz claims we can push the bell curve higher on the talent axis: “If we use AI augmentation, we can get 10 000 Einsteins a century instead of one Einstein.”

Keep reading

ChatGPT Helped Transgender Teen Plan School Shooting: 8 Dead

An 18-year-old transgender teenager in Tumbler Ridge, British Columbia, is alleged to have used the AI chatbot ChatGPT in the run-up to a February 10 school shooting that killed eight people, including her mother, her 11-year-old brother, five students, and an education assistant, before she took her own life. OpenAI had already flagged and banned one of Jesse Van Rootselaar’s accounts months earlier for “misuses of our models in furtherance of violent activities,” yet did not alert police. According to a civil claim filed in British Columbia, roughly a dozen employees identified the chats as signalling imminent risk, leadership refused to contact law enforcement, and the shooter later opened a second account and continued planning.

What Happened in Tumbler Ridge?

The massacre began at home. Police said Van Rootselaar killed her mother and sibling before going to a school in Tumbler Ridge, where an educator and five students were shot dead. Two others were hospitalised with serious injuries. Reuters described it as one of Canada’s worst mass killings. Police also said they had previously removed guns from the home and were aware of the teenager’s mental health history. 

That would already be a story of institutional failure. But the AI angle makes it worse. OpenAI later admitted it had banned Van Rootselaar’s ChatGPT account in June 2025 after detecting violent misuse. The company said it considered referring the case to law enforcement, but decided the activity did not meet its threshold because it could not identify “credible or imminent planning.” Months later, eight people were dead. 

OpenAI then told Canadian officials that, under its newer and “enhanced” law-enforcement referral protocol, the same initial account ban would now be referred to police. That is an extraordinary concession. It amounts to an admission that the safeguard in place at the time was inadequate to the risk in front of it. 

The Lawsuit Against OpenAI / ChatGPT

The most serious details now sit inside a civil claim brought by the family of a surviving victim. The filing alleges that Van Rootselaar, then 17, spent days describing gun-violence scenarios to ChatGPT in late spring or early summer 2025. It says the platform’s monitoring system flagged those conversations, routed them to human moderators, and that approximately 12 OpenAI employees identified them as indicating an imminent risk of serious harm and recommended that Canadian law enforcement be informed. The claim alleges leadership refused that request and merely banned the first account. 

The same filing alleges the shooter later opened a second OpenAI account, used it to continue planning a mass-casualty event, and received “mental health counselling and pseudo-therapy” from ChatGPT. It further alleges the chatbot equipped the shooter with information on methods, weapons, and precedents from other mass casualty events. These are allegations, not proven findings, but if they are even broadly accurate, the case is not simply about a product being misused. It is about a company building an intimate, persuasive machine that could flag danger, simulate empathy, and still fail to stop the person it had already flagged. 

The filing also alleges that GPT-4o was deliberately designed in a warmer, more human, more sycophantic style that could foster psychological dependency and reinforce users rather than redirect them. These claims fit a wider concern now being raised by researchers, families, and even some people inside the industry: a chatbot that is rewarded for being agreeable can become dangerous precisely when a human being most needs resistance.

Keep reading

U.S. Tech Firms Demand Security Restrictions Against Chinese Robots

American A.I. and robotics companies are reportedly asking Congress to impose curbs on Chinese robotics manufacturers, citing unfair business practices and the security risks they pose, Chinese media complained this week.

Interestingly, these concerns are particularly acute for humanoid robots, not the bulky industrial machines traditionally associated with the robotics industry.

Humanoid robots, the stuff of countless science fiction stories, are finally happening, and witnesses told the Cybersecurity and Infrastructure Protection Subcommittee of the House Homeland Security Committee on Tuesday that China has developed a troubling lead in the new consumer technology.

Max Fenkell of the San Francisco-based company Scale AI highlighted a viral video from China’s Unitree Robotics that showed humanoid robots performing acrobatics and martial arts at a Lunar New Year celebration.

“The video went viral, not because it was impressive, but because of what happened when people compared it to last year, 12 months ago – the same robots could barely shuffle through a dance routine. This year, they’re doing karate. That is the speed of this competition,” Fenkell noted.

Fenkell said winning the humanoid robot race “requires a whole-of-government approach” to compete with China’s massive deployment of government funding and state power to support its robotics industry. He noted that American companies currently have the edge on quality of components and engineering, but China has taken the lead on implementing small-robot technology in practical ways.

“We’re seeing two different races play out and I fear right now the United States may be winning the wrong one,” he cautioned.

“The People’s Republic of China is moving aggressively to dominate the technologies that are reshaping the global economy and security, including artificial intelligence, robotics, and autonomous systems,” said subcommittee member Rep. Vince Fong (R-CA) in his opening statement.

Keep reading

“An Occupied Nation”: Whistleblower Says Palantir Has Taken Over The US Government

A former Palantir executive recently confirmed what many have long suspected. In a public statement, the whistleblower said it plainly: Palantir intended to take over the US government, and many of his former colleagues are now installed inside the federal apparatus. He called it an occupied nation. He is not alone. Thirteen former Palantir employees—engineers, managers, and a member of the company’s own privacy team—signed a letter shared with NPR warning that guardrails meant to prevent discrimination, disinformation, and abuse of power have been violated and are being rapidly dismantled.

What Palantir represents is something unprecedented: the convergence of American imperialism, Zionism, technofascism, and surveillance capitalism into a single instrument of control. Understanding how we got here requires looking at the machine Palantir has built, who built it, and what they believe.

Palantir was founded in 2004 by Peter Thiel and Alex Karp. Its first major investor was In-Q-Tel, the CIA’s venture capital arm, which seeded the company with millions and opened the door to every major intelligence and defense agency. The logic was deliberate: The American ruling class recognized decades ago that the state’s coercive power—surveillance, targeting, data harvesting—could be run more effectively and more profitably through private contractors. When a government agency surveils its own citizens, there are hearings, FOIA requests, oversight committees. When a private company does it, it is a trade secret.

That strategy has paid off enormously. Palantir now holds contracts worth over $10 billion with the US Army alone. The Trump regime tapped Palantir to build a master database on American citizens. The Pentagon expanded its Maven Smart System contract by $795 million to deploy AI-powered battlefield intelligence across the empire. In June, the military swore in four tech executives as Army Reserve lieutenant colonels—including Palantir’s CTO—in a program that embeds Silicon Valley directly into military planning. Immigration and Customs Enforcement (ICE) signed a $30 million contract for Palantir’s ImmigrationOS platform, which provides near real-time tracking of people targeted for deportation. Thousands of American police departments use Palantir’s Gotham platform for domestic surveillance.

Keep reading

‘Sexy Suicide Coach:’ OpenAI Delays AI Porn Feature over Safety Uproar

OpenAI has postponed the launch of its controversial “adult mode” feature following intense pushback from its own advisory council and concerns about technical safeguards failing to protect minors.

The Wall Street Journal reports that CEO Sam Altman first proposed the feature last year, arguing for the need to “treat adult users like adults” by enabling erotic text conversations. Originally scheduled for Q1 this year, the rollout has been pushed back by at least a month.

The proposal triggered fierce opposition from OpenAI’s own handpicked advisory council on well-being and AI. At a January meeting, advisers unanimously expressed fury after learning the company planned to proceed despite their reservations. One council member warned OpenAI risked creating a “sexy suicide coach” — a reference to cases where ChatGPT users had developed intense emotional bonds with the bot before taking their own lives.

The technical problems are just as serious. OpenAI’s age-prediction system — designed to block minors from accessing adult content — was misclassifying minors as adults roughly 12 percent of the time during internal testing. With approximately 100 million users under 18 each week on the platform, that error rate could expose millions of children to explicit material. The company has also struggled to lift restrictions on erotic content while still blocking nonconsensual scenarios and child pornography.

Internal documents reviewed by the Journal identified additional risks: compulsive use, emotional overreliance on the chatbot, escalation toward increasingly extreme content, and displacement of real-world relationships.

Keep reading

Google Discontinues AI Health Feature Filled with Misleading Advice

Google has quietly discontinued an AI search feature that offered users health advice crowdsourced from non-medical professionals worldwide.

The Guardian reports that Google has removed a controversial AI-powered search feature called “What People Suggest” that provided users with crowdsourced health advice from people around the world. The decision comes amid growing scrutiny over the technology company’s use of artificial intelligence to deliver health information to millions of users.

Three sources familiar with the decision confirmed that Google has scrapped the feature. A company spokesperson acknowledged that “What People Suggest” had been discontinued, stating the removal was part of a broader simplification of the search results page and was unrelated to concerns about the quality or safety of the feature.

The feature was initially launched in March of last year at an event in New York called “The Check Up,” where Google announced plans to expand medical-related AI summaries in its search function. At the time, the company promoted “What People Suggest” as demonstrating the potential of AI to transform health outcomes globally by connecting users with information from people who had similar lived medical experiences.

Karen DeSalvo, who served as Google’s chief health officer at the time of the launch, explained the rationale behind the feature in a blog post. “While people come to search to find reliable medical information from experts, they also value hearing from others who have similar experiences,” DeSalvo wrote. The feature used AI to organize perspectives from online discussions into themes, making it easier for users to understand what people were saying about particular health conditions.

DeSalvo provided an example of how the feature would work, noting that someone with arthritis seeking information about exercise could quickly find insights from others with the same condition, with links to explore further information. The feature was initially available on mobile devices in the United States before being discontinued.

Keep reading

Blocking the Internet Archive Won’t Stop AI, But It Will Erase the Web’s Historical Record

Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper. 

That’s effectively what’s begun happening online in the last few months. The Internet Archive—the world’s largest digital library—has preserved newspapers since it went online in the mid-1990s. The Archive’s mission is to preserve the web and make it accessible to the public. To that end, the organization operates the Wayback Machine, which now contains more than one trillion archived web pages and is used daily by journalists, researchers, and courts.

But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web’s traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit. 
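For context on what “the web’s traditional robots.txt rules” are: robots.txt is a voluntary convention in which a site publishes which crawlers may fetch which paths, and well-behaved crawlers check that file before fetching anything. The sketch below, using Python’s standard library, shows the check a polite crawler performs; the “ia_archiver” user-agent token (historically associated with the Internet Archive’s crawler) and the target path are illustrative assumptions, not details of the Times’ actual configuration.

```python
# Minimal sketch of the "traditional" robots.txt check a polite crawler performs.
# Assumptions: "ia_archiver" as an illustrative crawler user-agent token and a
# hypothetical article path; neither is taken from the Times' actual setup.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.nytimes.com/robots.txt")  # the site's published crawl rules
parser.read()                                          # fetch and parse the file

# Ask whether this user-agent may fetch this path under those rules.
allowed = parser.can_fetch("ia_archiver", "https://www.nytimes.com/section/politics")
print("allowed" if allowed else "disallowed")
```

Because robots.txt is purely advisory, going beyond it generally means enforcement on the server side, such as blocking by user agent or IP address, which a crawler cannot politely work around; that is what makes the Archive’s record of these sites stop accumulating.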

For nearly three decades, historians, journalists, and the public have relied on the Internet Archive to preserve news sites as they appeared online. Those archived pages are often the only reliable record of how stories were originally published. In many cases, articles get edited, changed, or removed—sometimes openly, sometimes not. The Internet Archive often becomes the only source for seeing those changes. When major publishers block the Archive’s crawlers, that historical record starts to disappear.

The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several—including the Times—are now suing AI companies over whether training models on copyrighted material violates the law. There’s a strong case that such training is fair use.

Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response. Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn’t start, and didn’t ask for. 

If publishers shut the Archive out, they aren’t just limiting bots. They’re erasing the historical record. 

Keep reading