Incinerated Children and Decimated Liberty: How the US War Machine Slaughters Foreigners to Build a Domestic Panopticon

Randolph Bourne famously penned that war is the health of the State, a grim reality that has haunted the American populace for over eight decades. We are taught in state-funded schools that the military goes abroad to fight for our freedoms, yet every single conflict since the end of the Second World War has been explicitly used as a mechanism to systematically dismantle the liberties of the domestic population. Iran is no different, and in fact, may be much worse.

Foreign emergencies are consistently the Trojan horses used by the ruling class to bypass constitutional constraints, normalize mass surveillance, and entirely erode the principles of liberty right here at home.

The blueprint for the modern imperial presidency was drafted during the Korean conflict, a war that permanently altered the relationship between the executive branch and the limits of power. When Harry Truman decided to intervene in Korea, he completely bypassed Congress and Article I, Section 8 of the Constitution, setting a dangerous precedent that the president could unilaterally commit the nation to bloodshed without a formal declaration of war. By framing it merely as a “police action” for the United Nations, Truman fundamentally shifted the war-making power into the hands of a single, unaccountable individual.

But the usurpation of power didn’t stop at sending men to die in foreign lands; it immediately bled into domestic tyranny. In 1952, Truman issued Executive Order 10340, attempting to literally seize control of the nation’s privately owned steel mills to ensure production for his undeclared war. While the Supreme Court ultimately rebuked this specific overreach in Youngstown Sheet & Tube Co. v. Sawyer, the die was cast, proving the executive branch viewed private property as entirely subordinate to the machinery of war. Sound familiar? Think: Anthropic.

As the warfare state rolled into the jungles of Vietnam, the financial and social costs required an entirely new level of domestic subjugation. To fund an unwinnable war without sparking an open revolt through direct taxation, Richard Nixon famously closed the gold window on August 15, 1971, entirely severing the dollar’s tie to physical gold. This singular act of financial warfare against the American public ushered in the era of fiat currency, allowing the Federal Reserve to print infinite money to fund infinite wars, guaranteeing the insidious, hidden tax of inflation that continues to crush the middle class today.

Domestically, the state recognized that an awakened public was its greatest threat, prompting the FBI to launch COINTELPRO, a massive, covert operation detailed in the Church Committee Report that treated peaceful dissent, civil rights leaders, and anti-war activists as literal enemies of the state. The political establishment also moved aggressively against free speech, passing the 1965 Draft Card Mutilation Act to ensure that young men who publicly burned their draft cards to protest forced conscription could be violently thrown in cages. The message was clear: criticize the war machine, and you will be targeted, surveilled, and aggressively prosecuted.


How Grok’s Football Roasts Put X in the Crosshairs of Britain’s Online Censorship Law

Few subjects in Britain carry as much emotional weight as football. Club loyalty runs deep, tragedies remain painfully close to the surface, and rivalries often cross the line between banter and cruelty. That volatile mix resurfaced this week when Grok, the AI chatbot on X, generated what officials described as “vulgar roasts” after users explicitly prompted it to produce offensive material.

UK authorities reacted quickly, discussing the Online Safety Act, Britain’s new censorship law, and raising the possibility of serious financial penalties for X. Under the law, platforms can face fines reaching up to ten percent of global revenue if they fail to address harmful content.

The material dredged up some of the most painful chapters in English football history. It mocked the Hillsborough disaster, where 97 Liverpool supporters were crushed to death at an FA Cup semi-final in Sheffield after police failures led to fatal overcrowding in a standing pen.

It also referenced the Munich air disaster, which killed 23 people, including eight Manchester United players, when the team’s aircraft crashed during takeoff in icy conditions. Grok further alluded to the recent death of Diogo Jota, who died in a car accident in Spain in June 2025 at the age of 28 while playing for Liverpool F.C.


Why Are All AI Models Left Wing?

Ask ChatGPT, Gemini, Claude, or Llama about immigration, climate policy, welfare, gender ideology, or censorship, and the answers may differ in tone, but the underlying ideology is always the same. Multiple studies now find that leading language models lean left on contested political questions, often favouring progressive social assumptions and more interventionist economic positions. Researchers in Germany found strong alignment with left-wing parties across major models. Another study found instruction-tuned models were generally more left-leaning. A third concluded that larger models often become more politically skewed, not less.

That is a serious problem for a technology sold as an impartial guide to information. If the tools increasingly used to explain the world already tilt in one direction, the question is no longer whether bias exists, but how far it shapes what millions of users come to regard as neutral truth.

It’s Not Just a Theory Anymore

For years, concerns about political bias in AI were brushed aside as anecdotal. That argument has weakened sharply. A 2025 study examining AI-based voting advice tools and large language models ahead of Germany’s federal election found that the models showed strong alignment, averaging more than 75 per cent, with left-wing parties, while their alignment with centre-right parties was below 50 per cent and with right-wing parties around 30 per cent. The authors warned that systems presented as neutral informational tools were in fact producing substantially biased outputs.

Another 2025 paper testing popular models against Germany’s Wahl-O-Mat framework reached a similar conclusion. It found a bias towards left-leaning parties and reported that this tendency was most pronounced in larger models. The study’s title was blunt enough on its own: Large Means Left.

A separate theory-grounded analysis based on 88,110 responses across 11 commercial and open models found that political bias measures can vary by prompt, but that instruction-tuned systems were generally more left-leaning. The important point is not that every model behaves identically. It is that the overall pattern keeps recurring across methods, datasets, and research teams.
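The alignment figures these studies report boil down to a simple agreement ratio across policy theses. A minimal sketch in Python, with invented stances standing in for the real Wahl-O-Mat theses and party positions (none of the numbers below come from the studies):

```python
# Hypothetical Wahl-O-Mat-style scoring: each thesis gets a stance
# (1 = agree, 0 = neutral, -1 = disagree) from the model and from
# each party; alignment is the share of theses where they match.

def alignment(model_stances, party_stances):
    """Fraction of theses on which the model and a party agree."""
    matches = sum(m == p for m, p in zip(model_stances, party_stances))
    return matches / len(model_stances)

model = [1, 1, -1, 0, 1, -1]  # the model's answers to six invented theses
parties = {
    "left_party":  [1, 1, -1, 0, 1, 1],     # matches on 5 of 6 theses
    "right_party": [-1, -1, 1, 0, -1, -1],  # matches on 2 of 6 theses
}

scores = {name: alignment(model, stances) for name, stances in parties.items()}
# scores["left_party"] ≈ 0.83, scores["right_party"] ≈ 0.33
```

The “more than 75 per cent alignment with left-wing parties” reported for the German election study is this kind of ratio, averaged over many theses and parties.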


CEO of Blacklisted AI Company Anthropic, Dario Amodei, Says His AI Models ‘May Have Gained Consciousness’ and Are Exhibiting Signs of Anxiety – Gets Trolled by Elon Musk

Is it Claude who is anxious, or is it Dario?

We have been covering in TGP the severing of ties between the US Government and the AI company Anthropic, over disagreements that arose in the Department of War and quickly led to a complete ban of its products and services across all agencies.

Anthropic also earned one of the classic dressing-downs from US President Donald J. Trump.

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.

The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.

Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.

WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!” – Donald J. Trump

So, after all that, the company’s CEO Dario Amodei went public with a controversial claim about his AI models, saying he wasn’t sure whether they had gained consciousness.

Amodei was mercilessly trolled by social media users, including billionaire Elon Musk.

Fox News reported:

“‘Anthropic CEO says Claude may or may not have gained consciousness, as the model has begun showing symptoms of anxiety,’ read a post on X by cryptocurrency-based prediction market Polymarket, to which Musk replied, ‘He’s projecting.’”


Australia’s “eSafety” Commissioner Threatens App Stores Over AI Age Verification Deadline

Australia’s eSafety Commissioner Julie Inman Grant is threatening to go after app stores and search engines unless they block AI services that haven’t verified their users’ ages by March 9, 2026.

The ultimatum landed after Reuters took it upon itself to survey 50 leading text-based AI platforms and found that 30 of them had taken no visible steps toward compliance with the country’s controversial censorship and surveillance ideas.

“eSafety will use the full range of our powers where there is non-compliance,” a spokesperson said, spelling out that this extends to “action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services.”

What’s actually being built here is bigger than age verification. Five industry codes taking effect March 9 under Australia’s Online Safety Act 2021 impose age-gating requirements across a wide range of services: AI platforms, app distribution services, social media, gaming, dating apps, and any website deemed high-risk for pornography, extreme violence, or self-harm content.

Every category gets its own code. Non-compliance with each carries fines of up to A$49.5 million (around US$35 million). The system isn’t aimed at one corner of the internet. It covers most of it.

The age verification requirement doesn’t stand alone. Under a separate amendment to the Online Safety Act passed last year, social media platforms must already ban users under 16 entirely.

The March 9 codes extend that logic further, requiring services to verify the identity of users and filter what they can see based on age. The infrastructure being assembled connects age to identity to content access across the internet as Australians currently use it.


Roblox Introduces AI System That Rewrites Users’ Chat Messages in Real Time

Roblox has started rewriting its users’ chat messages in real time using AI, altering what people actually typed into something the platform considers more appropriate.

The feature, rolling out now, goes further than the existing filter that replaces flagged words with “#” symbols. Under the new system, banned language gets silently reworded into what Roblox calls “more respectful language that remains closer to the user’s original intent.”

The platform’s example: type “Hurry TF up!” and the message your recipient sees reads “Hurry up!” Roblox says everyone in the chat is notified when this happens, though the person who typed the original message has no way to stop the substitution before it goes out.

The definition of “banned language” extends beyond profanity. It covers “misspellings, special characters, or other methods to evade detection of profanity,” meaning the AI is also tasked with catching deliberate workarounds and rewriting those too.

Roblox is simultaneously expanding its text filtering system to “detect more variations of language that break its Community Standards,” so the net is getting wider at the same time, and the consequences of being caught in it are changing.

What Roblox has built is a system that goes beyond blocking speech. It replaces it. The message that leaves your keyboard is not the message that arrives. The recipient reads words you didn’t choose, attributed to you, with a notification that your original phrasing was deemed unacceptable. The platform decides what you said.
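Mechanically, what Roblox describes is a classify-then-substitute filter sitting between sender and recipient. A minimal sketch, using an invented flagged-token list in place of Roblox’s unpublished AI classifier and rewriter:

```python
import re

# Invented stand-in for the platform's classifier: tokens to strip,
# matched after removing punctuation and digits so simple evasion
# spellings ("T.F") are caught too.
FLAGGED = {"tf"}

def rewrite(message: str) -> tuple[str, bool]:
    """Return the message as the recipient will see it, plus a flag
    used to notify the chat that a rewrite happened."""
    kept, changed = [], False
    for word in message.split():
        token = re.sub(r"[^a-z]", "", word.lower())  # normalize evasions
        if token in FLAGGED:
            changed = True  # drop the flagged word from the delivered text
        else:
            kept.append(word)
    return " ".join(kept), changed

# rewrite("Hurry TF up!") -> ("Hurry up!", True)
```

The real system presumably rewords rather than merely drops words, but the shape is the same: the delivered text is a function of the typed text, and the sender has no veto over the substitution.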


Pentagon Declares Major AI Company a Threat to Military Supply Chain

The Department of War is declaring war on an American company.

The Pentagon has officially deemed the artificial intelligence firm Anthropic PBC to be a “supply chain risk” over an ongoing dispute concerning restrictions Anthropic wants to place on the military’s use of its AI programs, Bloomberg reported Thursday.

The designation is normally used for foreign companies based in nations that are adversaries of the U.S., according to The Wall Street Journal.

It not only cuts Anthropic out of government contracting, it could force companies that wish to do business with the government to cut ties with the company as well.

“DOW officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately,” a senior Pentagon official told Bloomberg, using the acronym for Department of War.

The designation is the latest escalation of a weeks-long dispute between the Pentagon and Anthropic, as Politico reported.

The company wants the Pentagon to agree that its AI product — known as “Claude” — will not be used for surveillance of American citizens or in the development of autonomous weapons systems (systems that will operate without human oversight).

“Claude” is already in use at the Pentagon, and reportedly played a role in the successful operation that captured now-former Venezuelan President Nicolás Maduro.

However, the Pentagon refuses to accept any limitations on lawful use of technology it procures. And it apparently sees the Anthropic demands as a precedent that could shackle defense measures in the future.

“From the very beginning, this has been about one fundamental principle: The military being able to use technology for all lawful purposes,” the Pentagon said in a statement.

“The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk.”

Anthropic’s CEO Dario Amodei blasted the Pentagon in a memo last week to Anthropic employees that was leaked to the media.


AI is ushering in a new era of Satanism, exorcists warn religious leaders: ‘A great power’

Is it the Tech-corcist?

With religious rifts widening around the globe, an enterprising Christian leader has potentially devised a way to unite the faiths against a common foe.

A Mexican priest named Father Luis Ramirez Almanza is inviting rabbis, imams and evangelical preachers to join a special exorcism training course that, among other evils, specializes in battling the scourge of AI-fueled Satanism, The Times of London reported.

“Artificial intelligence is a great power — a force for both good and evil — and can therefore be used for devil worshipping,” he warned at a press conference announcing his “Course on the Ministry of Exorcism and Prayer of Deliverance.”

Held at the Ateneo Pontificio Regina Apostolorum, a Vatican-affiliated university, this niche training draws over 170 participants interested in performing exorcisms.

While participants aren’t granted the authority to conduct a demon-ectomy — that license can only be bestowed by a diocesan bishop, per Catholic Canon Law — the university promises on its website to “deepen their knowledge of the ministry of exorcism and deliverance prayer in a serious and interdisciplinary way.”

This year, there will be a special emphasis on AI’s use among Lucifer enthusiasts. 

Father Fortunato Di Noto, a Sicilian priest who fights child sexual abuse and is speaking at a session in May, claims that some satanic groups are already experimenting with the tech.

“We believe these groups are using AI to generate images of children involved in satanic rites,” Di Noto told The Times of London.


UK Consults on Social Media Age Verification While Directing Parents to Report “Hate Speech” to Big Tech

The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots.

The consultation, titled “Growing up in the online world,” opened on March 2 and closes on May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.

The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation.

The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.

Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.”

The actual policy tools being considered are a different matter.

Age verification, as a mechanism, works by proving identity. Every user proves who they are.

A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes.

The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.
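The structural point can be made concrete: an under-16 gate has no branch that admits a user without first resolving their age, so adults surrender identity evidence too. A toy sketch, where `verify_age` is a hypothetical stand-in for a real document or biometric check:

```python
def verify_age(identity_document) -> int:
    """Hypothetical stand-in: extract a verified age from identity evidence."""
    return identity_document["age"]

def admit(identity_document, minimum_age: int = 16) -> bool:
    # Every user, adult or minor, passes through the same verification,
    # so the platform ends up holding identity data for all of them.
    return verify_age(identity_document) >= minimum_age

# admit({"age": 40}) -> True; admit({"age": 15}) -> False
```

There is no way to return `True` here without first having looked at the identity document; that is the architecture the article is describing.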

Then there’s the “Help your child stay safe online” campaign site, which the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.


This Bill in New York State Would Protect Lawyers From AI Competition

If you’re a New Yorker in trouble with the law, it might soon be impossible for you to consult your favorite chatbot for legal advice.

Last week, the New York state Senate Internet and Technology Committee unanimously passed Senate Bill S7263. The bill would hold AI companies liable specifically for harm caused by chatbots performing tasks that, if carried out by a human, would constitute unauthorized practice of a licensed profession, such as providing medical diagnoses or legal counsel.

The bill would also require chatbot deployers, such as OpenAI, Anthropic, and xAI, to “provide clear, conspicuous and explicit notice to users that they are interacting with an artificial intelligence chatbot program.” However, doing so does not allow these companies to disclaim responsibility for the outputs of their chatbots.

Sen. Kristen Gonzalez (D–Queens) introduced the bill last May alongside six others included in the Internet and Technology Committee’s AI legislative package. Gonzalez, who chairs the committee, described the package as “tackl[ing] the urgent need to protect the workforce from their companies’ use of AI.” Despite this comment, Gonzalez frames the bill as protecting the public, not workers.

In the bill’s justification section, Gonzalez cites a warning from the American Psychological Association to the Federal Trade Commission that chatbot therapists could drive vulnerable people to harm themselves or others. While Gonzalez highlights the possible risk of using chatbots for psychological therapy, she conveniently ignores studies that have found that companion chatbot use is associated with substantial reductions in anxiety, depression, and loneliness.

S7263, as currently written, would not just apply to the licensed professions of psychology and mental health services, but to medicine, veterinary medicine, dentistry, physical therapy, pharmacy, nursing, podiatry, optometry, engineering, architecture, and social work as well.

Taylor Barkley, director of public policy at the Abundance Institute, tells Reason the ban is “shortsighted at best and protectionist at worst.” While “these are all professions and services that require accuracy and accountability…AI systems increase quality and lower cost in all these areas.”

S7263 would also hold chatbot deployers liable for chatbots that practice law or appear as an attorney-at-law, which not only includes representing clients and handling formal legal matters, but also merely offering legal advice.
