CEO of Blacklisted AI Company Anthropic, Dario Amodei Says His AI Models ‘May Have Gained Consciousness’ and Are Exhibiting Signs of Anxiety – Gets Trolled by Elon Musk

Is it Claude who is anxious, or is it Dario?

We at TGP have been covering the severing of ties between the US Government and the AI company Anthropic, over disagreements that arose in the Department of War and quickly led to a complete ban of the company’s products and services in all agencies.

Anthropic also deserved one of the classic dressing-downs by US President Donald J. Trump.

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.

The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.

Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.

WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!” – Donald J. Trump

So, after all that, the company’s CEO Dario Amodei went public with a controversial claim about his AI models, saying he wasn’t sure whether they had gained consciousness.

Amodei was mercilessly trolled by social media users, including billionaire Elon Musk.

Fox News reported:

“‘Anthropic CEO says Claude may or may not have gained consciousness, as the model has begun showing symptoms of anxiety,’ read a post on X by cryptocurrency-based prediction market Polymarket, to which Musk replied, ‘He’s projecting.’”

Keep reading

Australia’s “eSafety” Commissioner Threatens App Stores Over AI Age Verification Deadline

Australia’s eSafety Commissioner Julie Inman Grant is threatening to go after app stores and search engines unless they block AI services that haven’t verified their users’ ages by March 9, 2026.

The ultimatum landed after Reuters took it upon itself to survey 50 leading text-based AI platforms and found that 30 of them had taken no visible steps toward compliance with the country’s controversial censorship and surveillance ideas.

“eSafety will use the full range of our powers where there is non-compliance,” a spokesperson said, spelling out that this extends to “action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services.”

What’s actually being built here is bigger than age verification. Five industry codes taking effect March 9 under Australia’s Online Safety Act 2021 impose age-gating requirements across a wide range of services: AI platforms, app distribution services, social media, gaming, dating apps, and any website deemed high-risk for pornography, extreme violence, or self-harm content.

Every category gets its own code, and non-compliance with each carries fines of up to A$49.5 million (around US$35 million). The system isn’t aimed at one corner of the internet. It covers most of it.

The age verification requirement doesn’t stand alone. Under a separate amendment to the Online Safety Act passed last year, social media platforms must already ban users under 16 entirely.

The March 9 codes extend that logic further, requiring services to verify the identity of users and filter what they can see based on age. The infrastructure being assembled connects age to identity to content access across the internet as Australians currently use it.

Keep reading

Brutal Numbers: Schools Spent $30 Billion on Laptops… and They Seem to Have Made Kids Dumber

Technological innovation doesn’t always yield good results.

Even as electronic devices are championed as the best means of learning for youth — with a massive price tag — we aren’t seeing dramatic improvements in students’ performance.

On Feb. 23, Techspot published an article tracing the beginning of the tech takeover in the classroom to former Maine independent Gov. Angus King.

In 2002, King created a program to put Apple laptops in middle schoolers’ hands. By 2024, the federal government had spent a staggering $30 billion following his state’s lead, getting tablets and laptops to students across the country.

This seemed like an obvious shift in the right direction on paper: The world is becoming more technological. Students will use these devices in the workplace, so why not familiarize them now?

But neuroscientist Jared Cooney Horvath laid out the adverse impacts of this decision to the U.S. Senate Committee on Commerce, Science, and Transportation.

According to Horvath, Gen Z is the first cohort to see declining test scores compared to their predecessors. He found an inverse relationship between academic performance and time using digital devices.

“This is not a debate about rejecting technology,” he told lawmakers. “It is a question of aligning educational tools with how human learning actually works. Evidence indicates that indiscriminate digital expansion has weakened learning environments rather than strengthened them.”

Techspot cited studies showing that 3,000 university students spent two-thirds of their time on their school laptops engaging with material unrelated to classwork.

Fortune found in 2017 that test scores had not improved under King’s program.

A study published in OxJournal made a worrying conclusion regarding technology and attention deficit hyperactivity disorder.

The research “established an evident correlation between digital media use and the prevalence of ADHD in contemporary society. This applies for all age demographics, depending on the setting, such as being in school or in a workplace.”

“The earlier we immerse our children’s underdeveloped minds in digital media, offering them instant fulfillment, the higher the likelihood that an attention-deficit disorder will emerge as they mature,” the study continued.

“This inhibits individuals from focusing their selective attention on a particular task, as well as reduces their divided and sustained attention.”

A traditionally minded educator — or most conservatives — could have seen this coming.

Keep reading

Roblox Introduces AI System That Rewrites Users’ Chat Messages in Real Time

Roblox has started rewriting its users’ chat messages in real time using AI, altering what people actually typed into something the platform considers more appropriate.

The feature, rolling out now, goes further than the existing filter that replaces flagged words with “#” symbols. Under the new system, banned language gets silently reworded into what Roblox calls “more respectful language that remains closer to the user’s original intent.”

The platform’s example: type “Hurry TF up!” and the message your recipient sees reads “Hurry up!” Roblox says everyone in the chat is notified when this happens, though the person who typed the original message has no way to stop the substitution before it goes out.

The definition of “banned language” extends beyond profanity. It covers “misspellings, special characters, or other methods to evade detection of profanity,” meaning the AI is also tasked with catching deliberate workarounds and rewriting those too.

Roblox is simultaneously expanding its text filtering system to “detect more variations of language that break its Community Standards,” so the net is getting wider at the same time, and the consequences of being caught in it are changing.

What Roblox has built is a system that goes beyond blocking speech. It replaces it. The message that leaves your keyboard is not the message that arrives. The recipient reads words you didn’t choose, attributed to you, with a notification that your original phrasing was deemed unacceptable. The platform decides what you said.

Keep reading

Scientists Just Taught Lab-Grown Brain Cells to Play Video Games — and People Are Freaked Out

Researchers say they have successfully trained living human brain cells to play the classic video game Doom, marking the latest experiment in so-called “biological computing.”

Australian biotech company Cortical Labs announced the development in a recent demonstration showing neurons grown in a laboratory interacting with the 1993 first-person shooter.

The experiment builds on earlier work from 2022, when the company revealed that clusters of human brain cells grown in a petri dish had learned to play the much simpler video game Pong.

Those early experiments involved “mini-brains” made up of roughly 800,000 to one million living human neurons.

According to the company, the cells demonstrated the ability to adapt and learn basic tasks in real time.

Now researchers say they have taken the technology further by teaching the neurons to interact with the far more complex environment of Doom, a three-dimensional game that requires movement, targeting, and exploration.

To make the system work, engineers translated the digital signals from the game into patterns of electrical stimulation that the neurons could interpret.

“So we showed that biological neurons could play the game Pong,” Cortical Labs chief scientific officer Brett Kagan explained in a video announcement.

“This was a massive milestone because it demonstrated adaptive, real-time, goal-directed learning.”

“Doom was much more complex,” he added. “It’s 3D. It has enemies. It needs to explore its environment, and it’s hard.”

Keep reading

Pentagon Declares Major AI Company a Threat to Military Supply Chain

The Department of War is declaring war on an American company.

The Pentagon has officially deemed the artificial intelligence firm Anthropic PBC to be a “supply chain risk” over an ongoing dispute concerning restrictions Anthropic wants to place on the military’s use of its AI programs, Bloomberg reported Thursday.

The designation is normally used for foreign companies based in nations that are adversaries of the U.S., according to The Wall Street Journal.

It not only cuts Anthropic out of government contracting, but could also force companies that wish to do business with the government to cut ties with the firm.

“DOW officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately,” a senior Pentagon official told Bloomberg, using the acronym for Department of War.

The designation is the latest escalation of a weeks-long dispute between the Pentagon and Anthropic, as Politico reported.

The company wants the Pentagon to agree that its AI product — known as “Claude” — will not be used for surveillance of American citizens or in the development of autonomous weapons systems (systems that will operate without human oversight).

“Claude” is already in use at the Pentagon, and reportedly played a role in the successful operation that captured now-former Venezuelan President Nicolás Maduro.

However, the Pentagon refuses to accept any limitations on lawful use of technology it procures. And it apparently sees the Anthropic demands as a precedent that could shackle defense measures in the future.

“From the very beginning, this has been about one fundamental principle: The military being able to use technology for all lawful purposes,” the Pentagon said in a statement.

“The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk.”

Anthropic’s CEO Dario Amodei blasted the Pentagon in a memo last week to Anthropic employees that was leaked to the media.

Keep reading

AI is ushering in a new era of Satanism, exorcists warn religious leaders: ‘A great power’

Is it the Tech-corcist?

With religious rifts widening around the globe, an enterprising Christian leader has potentially devised a way to unite the faiths against a common foe.

A Mexican priest named Father Luis Ramirez Almanza is inviting rabbis, imams and evangelical preachers to join a special exorcism training course that, among other evils, specializes in battling the scourge of AI-fueled Satanism, the Times Of London reported.

“Artificial intelligence is a great power — a force for both good and evil — and can therefore be used for devil worshipping,” he warned at a press conference announcing his “Course on the Ministry of Exorcism and Prayer of Deliverance.”

Held at the Ateneo Pontificio Regina Apostolorum, a Vatican-affiliated university, this niche training draws over 170 participants interested in performing exorcisms.

While participants aren’t granted the authority to conduct a demon-ectomy — that license can only be bestowed by a diocesan bishop, per Catholic Canon Law — the university promises on its website to “deepen their knowledge of the ministry of exorcism and deliverance prayer in a serious and interdisciplinary way.”

This year, there will be a special emphasis on AI’s use among Lucifer enthusiasts. 

Father Fortunato Di Noto, a Sicilian priest who fights child sexual abuse and is speaking at a session in May, claims that some satanic groups are already experimenting with the tech.

“We believe these groups are using AI to generate images of children involved in satanic rites,” Di Noto told The Times Of London.

Keep reading

UK Consults on Social Media Age Verification While Directing Parents to Report “Hate Speech” to Big Tech

The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots.

The consultation, titled “Growing up in the online world,” opened on March 2nd and closes May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.

The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation.

The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.

Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.”

The actual policy tools being considered are a different matter.

Age verification, as a mechanism, works by proving identity. Every user proves who they are.

A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes.

The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.

Then there’s the “Help your child stay safe online” campaign site the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.

Keep reading

This Bill in New York State Would Protect Lawyers From AI Competition

If you’re a New Yorker in trouble with the law, it might soon be impossible for you to consult your favorite chatbot for legal advice.

Last week, the New York state Senate Internet and Technology Committee unanimously passed Senate Bill S7263. The bill would hold AI companies liable specifically for harm caused by chatbots performing tasks that, if carried out by a human, would constitute unauthorized practice of a licensed profession, such as providing medical diagnoses or legal counsel.

The bill would also require chatbot deployers, such as OpenAI, Anthropic, and xAI, to “provide clear, conspicuous and explicit notice to users that they are interacting with an artificial intelligence chatbot program.” However, doing so does not allow these companies to disclaim responsibility for the outputs of their chatbots.

Sen. Kristen Gonzalez (D–Queens) introduced the bill last May alongside six others included in the Internet and Technology Committee’s AI legislative package. Gonzalez, who chairs the committee, described the package as “tackl[ing] the urgent need to protect the workforce from their companies’ use of AI.” Despite this comment, Gonzalez frames the bill as protecting the public, not workers.

In the bill’s justification section, Gonzalez cites a warning from the American Psychological Association to the Federal Trade Commission that chatbot therapists could drive vulnerable people to harm themselves or others. While Gonzalez highlights the possible risk of using chatbots for psychological therapy, she conveniently ignores studies that have found that companion chatbot use is associated with substantial reductions in anxiety, depression, and loneliness.

S7263, as currently written, would not just apply to the licensed professions of psychology and mental health services, but to medicine, veterinary medicine, dentistry, physical therapy, pharmacy, nursing, podiatry, optometry, engineering, architecture, and social work as well.

Taylor Barkley, director of public policy at the Abundance Institute, tells Reason the ban is “shortsighted at best and protectionist at worst.” While “these are all professions and services that require accuracy and accountability…AI systems increase quality and lower cost in all these areas.”

S7263 would also hold chatbot deployers liable for chatbots that practice or appear as an attorney-at-law, which covers not only representing clients and handling formal legal matters, but also merely offering legal advice.

Keep reading

Pentagon in Talks on Buying Ukrainian-Made Interceptors To Counter Iranian Drones

The Pentagon is in talks on purchasing Ukrainian-made interceptors to counter Iranian drones, the Financial Times has reported, as senior US officials have told Congress that US forces in the Middle East are having more trouble intercepting Iran’s drones than expected.

The report said that at least one Gulf country was also in talks on acquiring Ukrainian-made drone interceptors, as Gulf states have been using advanced US Patriot missiles, which cost more than $4 million apiece. The Ukrainian systems are much cheaper and were designed to counter the Russian version of Iran’s Shahed drones.

One Ukrainian official said the talks were “sensitive” but that it was “obvious that there is a surge in interest in the Ukrainian drone interceptors, which can intercept the Shahed for a very low cost.” The Ukrainian interceptors cost a few thousand dollars each to stop an Iranian Shahed drone, which is estimated to cost about $30,000.

Ukrainian President Volodymyr Zelensky said earlier this week that he has been in contact with Qatar and the UAE about the use of Ukraine’s anti-drone systems, though he expressed concern about Ukraine’s own stockpile.

“Ukraine’s expertise in countering ‘Shahed’ drones is currently the most advanced in the world,” he said. “However, any such co-operation aimed at protecting our partners can only proceed without diminishing our own defence capabilities.”

Ukraine’s anti-drone technology, which includes smaller drones and anti-drone guns, has struggled to intercept the Geran-3, a Russian-produced drone based on Iran’s Shahed-238 and powered by a jet engine.

Secretary of War Pete Hegseth and Chairman of the Joint Chiefs of Staff Dan Caine told lawmakers this week that US forces will not be able to intercept all of Iran’s drones and that more US casualties are expected.

Since the US and Israel launched the war against Iran on Saturday, US Central Command has confirmed the deaths of six American soldiers, who were killed by an Iranian drone that hit a makeshift operations center in Kuwait. According to media reports, they had no notice or warning to evacuate before the drone struck.

Keep reading