CEO of Blacklisted AI Company Anthropic, Dario Amodei Says His AI Models ‘May Have Gained Consciousness’ and Are Exhibiting Signs of Anxiety – Gets Trolled by Elon Musk

Is it Claude who is anxious, or is it Dario?

We have been covering in TGP the severing of ties between the US Government and the AI company Anthropic over disagreements that arose in the Department of War and quickly led to a complete ban of the company’s products and services across all agencies.

Anthropic also earned one of the classic dressing-downs from US President Donald J. Trump.

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.

The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.

Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.

WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!” – Donald J. Trump

So, after all that, the company’s CEO Dario Amodei went public with a controversial claim about his AI models, saying he wasn’t sure whether they had gained consciousness.

Amodei was mercilessly trolled by social media users, including billionaire Elon Musk.

Fox News reported:

“Anthropic CEO says Claude may or may not have gained consciousness, as the model has begun showing symptoms of anxiety,” read a post on X by cryptocurrency-based prediction market Polymarket, to which Musk replied, “He’s projecting.”

Keep reading

Australia’s “eSafety” Commissioner Threatens App Stores Over AI Age Verification Deadline

Australia’s eSafety Commissioner Julie Inman Grant is threatening to go after app stores and search engines unless they block AI services that haven’t verified their users’ ages by March 9, 2026.

The ultimatum landed after Reuters took it upon itself to survey 50 leading text-based AI platforms and found that 30 of them had taken no visible steps toward compliance with the country’s controversial censorship and surveillance ideas.

“eSafety will use the full range of our powers where there is non-compliance,” a spokesperson said, spelling out that this extends to “action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services.”

What’s actually being built here is bigger than age verification. Five industry codes taking effect March 9 under Australia’s Online Safety Act 2021 impose age-gating requirements across a wide range of services: AI platforms, app distribution services, social media, gaming, dating apps, and any website deemed high-risk for pornography, extreme violence, or self-harm content.

Every category gets its own code. Each instance of non-compliance carries fines of up to A$49.5 million (around US$35 million). The system isn’t aimed at one corner of the internet. It covers most of it.

The age verification requirement doesn’t stand alone. Under a separate amendment to the Online Safety Act passed last year, social media platforms must already ban users under 16 entirely.

The March 9 codes extend that logic further, requiring services to verify the identity of users and filter what they can see based on age. The infrastructure being assembled connects age to identity to content access across the internet as Australians currently use it.

Keep reading

Roblox Introduces AI System That Rewrites Users’ Chat Messages in Real Time

Roblox has started rewriting its users’ chat messages in real time using AI, altering what people actually typed into something the platform considers more appropriate.

The feature, rolling out now, goes further than the existing filter that replaces flagged words with “#” symbols. Under the new system, banned language gets silently reworded into what Roblox calls “more respectful language that remains closer to the user’s original intent.”

The platform’s example: type “Hurry TF up!” and the message your recipient sees reads “Hurry up!” Roblox says everyone in the chat is notified when this happens, though the person who typed the original message has no way to stop the substitution before it goes out.

The definition of “banned language” extends beyond profanity. It covers “misspellings, special characters, or other methods to evade detection of profanity,” meaning the AI is also tasked with catching deliberate workarounds and rewriting those too.

Roblox is simultaneously expanding its text filtering system to “detect more variations of language that break its Community Standards,” so the net is getting wider at the same time, and the consequences of being caught in it are changing.

What Roblox has built is a system that goes beyond blocking speech. It replaces it. The message that leaves your keyboard is not the message that arrives. The recipient reads words you didn’t choose, attributed to you, with a notification that your original phrasing was deemed unacceptable. The platform decides what you said.

Keep reading

Pentagon Declares Major AI Company a Threat to Military Supply Chain

The Department of War is declaring war on an American company.

The Pentagon has officially deemed the artificial intelligence firm Anthropic PBC to be a “supply chain risk” over an ongoing dispute concerning restrictions Anthropic wants to place on the military’s use of its AI programs, Bloomberg reported Thursday.

The designation is normally used for foreign companies based in nations that are adversaries of the U.S., according to The Wall Street Journal.

It not only cuts Anthropic out of government contracting, it could force companies that wish to do business with the government to cut ties with the company as well.

“DOW officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately,” a senior Pentagon official told Bloomberg, using the acronym for Department of War.

The designation is the latest escalation of a weeks-long dispute between the Pentagon and Anthropic, as Politico reported.

The company wants the Pentagon to agree that its AI product — known as “Claude” — will not be used for surveillance of American citizens or in the development of autonomous weapons systems (systems that will operate without human oversight).

“Claude” is already in use at the Pentagon, and reportedly played a role in the successful operation that captured now-former Venezuelan President Nicolás Maduro.

However, the Pentagon refuses to accept any limitations on lawful use of technology it procures. And it apparently sees the Anthropic demands as a precedent that could shackle defense measures in the future.

“From the very beginning, this has been about one fundamental principle: The military being able to use technology for all lawful purposes,” the Pentagon said in the statement.

“The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk.”

Anthropic’s CEO Dario Amodei blasted the Pentagon in a memo last week to Anthropic employees that was leaked to the media.

Keep reading

AI is ushering in a new era of Satanism, exorcists warn religious leaders: ‘A great power’

Is it the Tech-corcist?

With religious rifts widening around the globe, an enterprising Christian leader has potentially devised a way to unite the faiths against a common foe.

A Mexican priest named Father Luis Ramirez Almanza is inviting rabbis, imams and evangelical preachers to join a special exorcism training course that, among other evils, specializes in battling the scourge of AI-fueled Satanism, the Times Of London reported.

“Artificial intelligence is a great power — a force for both good and evil — and can therefore be used for devil worshipping,” he warned at a press conference announcing his “Course on the Ministry of Exorcism and Prayer of Deliverance.”

Held at the Ateneo Pontificio Regina Apostolorum, a Vatican-affiliated university, this niche training draws over 170 participants interested in performing exorcisms.

While participants aren’t granted the authority to conduct a demon-ectomy — that license can only be bestowed by a diocesan bishop, per Catholic Canon Law — the university promises on its website to “deepen their knowledge of the ministry of exorcism and deliverance prayer in a serious and interdisciplinary way.”

This year, there will be a special emphasis on AI’s use among Lucifer enthusiasts. 

Father Fortunato Di Noto, a Sicilian priest who fights child sexual abuse and is speaking at a session in May, claims that some satanic groups are already experimenting with the tech.

“We believe these groups are using AI to generate images of children involved in satanic rites,” Di Noto told The Times Of London.

Keep reading

UK Consults on Social Media Age Verification While Directing Parents to Report “Hate Speech” to Big Tech

The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots.

The consultation, titled “Growing up in the online world,” opened on March 2nd and closes May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.

The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation.

The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.

Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.”

The actual policy tools being considered are a different matter.

Age verification, as a mechanism, works by proving identity. Every user proves who they are.

A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes.

The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.

Then there’s the “Help your child stay safe online” campaign site the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.

Keep reading

This Bill in New York State Would Protect Lawyers From AI Competition

If you’re a New Yorker in trouble with the law, it might soon be impossible for you to consult your favorite chatbot for legal advice.

Last week, the New York state Senate Internet and Technology Committee unanimously passed Senate Bill S7263. The bill would hold AI companies liable specifically for harm caused by chatbots performing tasks that, if carried out by a human, would constitute unauthorized practice of a licensed profession, such as providing medical diagnoses or legal counsel.

The bill would also require chatbot deployers, such as OpenAI, Anthropic, and xAI, to “provide clear, conspicuous and explicit notice to users that they are interacting with an artificial intelligence chatbot program.” However, doing so does not allow these companies to disclaim responsibility for the outputs of their chatbots.

Sen. Kristen Gonzalez (D–Queens) introduced the bill last May alongside six others included in the Internet and Technology Committee’s AI legislative package. Gonzalez, who chairs the committee, described the package as “tackl[ing] the urgent need to protect the workforce from their companies’ use of AI.” Despite this comment, Gonzalez frames the bill as protecting the public, not workers.

In the bill’s justification section, Gonzalez cites a warning from the American Psychological Association to the Federal Trade Commission that chatbot therapists could drive vulnerable people to harm themselves or others. While Gonzalez highlights the possible risk of using chatbots for psychological therapy, she conveniently ignores studies that have found that companion chatbot use is associated with substantial reductions in anxiety, depression, and loneliness.

S7263, as currently written, would not just apply to the licensed professions of psychology and mental health services, but to medicine, veterinary medicine, dentistry, physical therapy, pharmacy, nursing, podiatry, optometry, engineering, architecture, and social work as well.

Taylor Barkley, director of public policy at the Abundance Institute, tells Reason the ban is “shortsighted at best and protectionist at worst.” While “these are all professions and services that require accuracy and accountability…AI systems increase quality and lower cost in all these areas.”

S7263 would also hold chatbot deployers liable for chatbots that practice or appear as an attorney-at-law, which not only includes representing clients and handling formal legal matters, but also merely offering legal advice.

Keep reading

AI-Generated Art Can’t Receive Copyright Protection After Supreme Court Declines Case

The advancement of AI-generated art suffered a crucial blow this week when the Supreme Court left in place a lower-court ruling that such works cannot be protected under U.S. copyright law.

The original plaintiff, a computer scientist from Missouri named Stephen Thaler, appealed to the Supreme Court after “lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator,” per Reuters.

Thaler, of St. Charles, Missouri, applied for a federal copyright registration in 2018 covering “A Recent Entrance to Paradise,” visual art he said his AI technology “DABUS” created. The image shows train tracks entering a portal, surrounded by what appears to be green and purple plant imagery.

The Copyright Office rejected his application in 2022, finding that creative works must have human authors to be eligible to receive a copyright.

According to The Verge, the U.S. Copyright Office issued new guidance last year saying that AI-generated works could not enjoy copyright protection, potentially destroying the profitability of text prompts with no original source material. Thaler had also tried to patent his AI-generated works, attempts that have likewise faced several legal challenges.

“The US federal circuit court similarly determined that AI systems can’t patent inventions because they aren’t human, which the US Patent Office reaffirmed in 2024 with new guidance, stating that while AI systems can’t be listed as inventors on a patent, people can still use AI-powered tools to develop them,” noted The Verge.

Thaler’s lawyers admitted that the Supreme Court’s rejection could hurt the advancement of AI-generated artworks.

“Even if it later overturns the Copyright Office’s test in another case, it will be too late,” Thaler’s lawyers claimed. “The Copyright Office will have irreversibly and negatively impacted AI development and use in the creative industry during critically important years.”

Without copyright protection, AI-generated works would fall into the public domain, allowing anyone to copy, sell, or use them, essentially destroying the potential to turn such works into commercial intellectual property.

Keep reading

X CRACKS DOWN on AI-Generated War Propaganda: NO MORE Cashing In on Fake Footage Without Labels

In an effort to protect truthful information during global conflicts, X has rolled out strict new rules targeting creators who peddle AI-generated videos of war without clear disclosures. This comes as pro-Iran propagandists flooded the platform with fabricated clips designed to sow chaos.

The policy shift, effective immediately, focuses on X’s Creator Revenue Sharing program. Creators posting AI-made content showing armed conflicts must include a clear label, or face penalties that hit where it hurts: their wallets.

According to details from X’s head of product, Nikita Bier, the platform is clamping down hard. “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Bier stated.

Bier elaborated that X will use its Community Notes system—crowdsourced fact-checking that empowers users over elite moderators—along with post metadata to detect undeclared AI content. The rules don’t ban AI videos outright; they just demand transparency via X’s built-in “Made with AI” label option.

Violators get a 90-day suspension from earning ad revenue on their posts. A second offense leads to a permanent ban from the program. This targets those exploiting wars for profit, without stifling creative expression.

The update follows a surge in deceptive content amid the Iran-Israel clash. Pro-Iran accounts have pushed AI fakes, like one claiming Iranian missiles sank the USS Abraham Lincoln. “Iran’s IRGC claims to have struck USS Abraham Lincoln with ballistic missiles. LIE,” CENTCOM posted. “The Lincoln was not hit. The missiles launched didn’t even come close.”

Keep reading

Left-wing ideology is being encoded into AI systems to censor “wrongthink”

In 2021, a group of researchers dramatically departed OpenAI, the company behind ChatGPT. Led by Dario Amodei, OpenAI’s former vice president of research, they cited deep concerns about “AI safety.” The company was moving too fast, they warned, prioritising commercial interests over humanity’s future. The risks were said to be existential. These Effective Altruists were going to do things the right way.

Their solution? Start a new company called Anthropic, premised on building AI “the right way” with “safety” (that word will become a recurring theme), and “proper guardrails.” They initially raised hundreds of millions (today, that number is in the tens of billions) from investors who bought the pitch: we’re the good guys preventing runaway artificial general intelligence (“AGI”).

Noble, right? Except these supposed guardrails against AGI have become pretty much impossible to quantify. What we do have is an incredibly sophisticated content moderation system that filters inquiries and commands through a Silicon Valley thought bubble. It doesn’t seem like they’re trying to prevent AGI from destroying humanity, but instead, to prevent you from challenging the core tenets of their political philosophy.

Go ahead and try to generate content questioning climate ideology, the trans agenda, voter ID laws or election integrity, and watch the “safety” guardrails kick in.

This isn’t about preventing Skynet. It’s about making sure AI parrots the right opinions and associates with the right kind of people.

Now that Anthropic is an AI technology giant in its own right, the company is facing the same critiques from true believers in the space. Amodei has put his principles on hold to allow foreign investment from Gulf states with poor human rights records. However, the company remains guided by a secular progressive “philosopher” whose values remain entirely detached from America’s founding ideas.

Keep reading