Should People Be Free To Deal With the Department of War?

In a February 27 post titled “You Should Have Moral Qualms about Anthropic’s Claims,” Hoover Institution senior fellow and foreign policy scholar Amy Zegart challenged the ethics of a company named Anthropic. What I found refreshing is that a defense contractor’s CEO had a strong enough belief in his ethics that he was willing to forgo a lucrative contract. According to Zegart, I should have moral qualms about that. I don’t, and I’ll say why.

Anthropic had told the Department of War that it did not want its products used for either autonomous weapons or mass surveillance of Americans. According to Zegart, the Pentagon stated that it did not contemplate such uses. But that wasn’t enough for Dario Amodei, the CEO of Anthropic, who stated that he could not “in good conscience” accept the War Department’s assurances. Here’s Brendan Bordelon in a February 26 news item in Politico:

[Secretary of War] Hegseth met with Anthropic CEO Dario Amodei on Tuesday to deliver a warning: give the military unfettered access to its Claude AI model by Friday evening or else have the government label it a “risk” to the supply chain. The designation, typically reserved for foreign firms with ties to U.S. adversaries, could ban companies that work with the government from partnering with Anthropic.

Hegseth threatened Anthropic with designating it as a risk to the supply chain. With that label, Anthropic could be forbidden, as noted above, from working with companies that work with the government. Hegseth also, though, threatened to invoke the Defense Production Act to compel Anthropic to work with the Defense Department. A risk to the supply chain and, at the same time, a firm that Hegseth wants to use? Hmmm. Bordelon quotes Dean Ball, whom he identifies as a former AI advisor in the Trump administration, noting the obvious contradiction. Said Ball, “You’re telling everyone else who supplies to the DOD you cannot use Anthropic’s models, while also saying that the DOD must use Anthropic’s models.”

Zegart cites the Politico article but doesn’t mention this contradiction. Instead, she goes after Anthropic and CEO Amodei. She writes:

There is a serious ethical question about whether one company, elected by nobody, with its own normative agenda as well as substantial global investors and customers, should be dictating the conditions of the most essential government role: protecting the lives of Americans.

But she misstates the issue. Anthropic isn’t trying to dictate the conditions of this essential government role. Anthropic is simply stating what its own limits are. If the Pentagon can find another supplier, it is free to do so and, indeed, has already done so. OpenAI has stepped up to take Anthropic’s place.

Moreover, why does Zegart think it’s important that Anthropic is elected by nobody? Does Zegart really think that companies that contemplate working with the Department of War should be elected by somebody?

President Trump Directs Every Federal Agency to Cease Use of Anthropic After AI Company Refuses to Comply with Pentagon’s Demands

President Trump on Friday ordered every federal agency to cease use of Anthropic AI after the company refused to comply with the Pentagon’s demands.

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military,” Trump said.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” Trump said.

“Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” Trump added.

“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!” Trump added.

Epstein ALIVE? Conspiracy Theories Surge Over Fortnite Logs And ‘Fake’ Prison Pics

The Jeffrey Epstein scandal just keeps expanding and evolving. Recent document dumps have reignited wild theories that the convicted sex trafficker didn’t kill himself—or perhaps didn’t die at all. 

With rampant speculation that both Epstein and Ghislaine Maxwell were intelligence assets, controlled by a “supra government” above elected officials, many are arguing nothing in their case is beyond the realms of possibility.

As we previously highlighted, Epstein has an extensive gaming history; his “littlestjeff1” Fortnite username sparked claims of post-death logins from Israel, amplifying the alive-and-kicking narrative.

As we reported, Epic Games debunked the original account as a hoax rename, but the gaming angle persists as a gateway for deeper conspiracies.

Ben Swann highlighted this in a recent X video, diving into “a shocking theory based on Fortnite activity linked to him.” 

Sam Altman Fumes That It Takes Longer to Train a Human Than an AI, Plus They Eat All That Wasteful Food

AI leaders insist they’ve got humanity’s best interests in mind. If we’re to take them at their word, then we must say: they have a really unfortunate habit of sounding like they have nothing but contempt for the human race.

The latest case in point: OpenAI CEO Sam Altman’s tone-deaf comments at an event hosted by The Indian Express — made fresh off his skin-crawlingly awkward refusal to join hands with Anthropic’s Dario Amodei on stage with other industry titans — in which he attempted to downplay critiques of AI’s environmental impact. 

For starters, he called it “unfair” to compare the energy costs of training an AI model “to how much it costs a human to do one inference query.” That’s because, as Altman explains, “it also takes a lot of energy to train a human.”

“It takes like 20 years of life and all of the food you eat during that time before you get smart,” Altman continued. “And not only that, it took the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever, to produce you.”

Measured that way, “probably AI has already caught up on an energy efficiency basis” to humans, Altman said.

Altman also fumed against claims about AI’s water consumption.

“Water is totally fake,” he began, almost taunting quote-miners. “It used to be true, we used to do evaporative cooling in data centers.” 

“But now that we don’t do that,” Altman said, you still see claims like “‘don’t use ChatGPT, it’s 17 gallons of water for each query,’ or whatever.” 

“This is completely untrue and totally insane,” he asserted. “No connection to reality.”

No one can deny that humans are costly to bring up in our industrialized age. We should be doing everything realistically possible to bring down our CO2 emissions and stop eating so much meat — but we aren’t, for a number of dispiriting systemic reasons we won’t get into today.

Regardless, at least those costs are going towards keeping human civilization ticking. All the water in agriculture will keep someone fed, and the fossil fuels we burn will keep someone warm. 

Leaked founder email says the quiet part out loud — Ring was built to spy on your neighborhood

Ring unveiled its controversial AI-powered Search Party feature in a multimillion-dollar ad that aired during the Super Bowl, sparking blowback from privacy advocates from left to right and dead center. Although the feature was positioned as a helpful way to track down lost pets and bring them home safely, a leaked email from Ring’s founder reveals a much more sinister purpose: Ring intended to spy on people all along.

Ring’s founder revealed the truth about Search Party

From the moment we heard about Search Party, one thing was clear — tracking down lost pets was only the tip of the iceberg. The obvious next step was clearly human surveillance. After all, Ring can already scan and identify designated humans with a feature called Familiar Faces. Once enabled, users can add the faces of friends and family to their Ring app so that their Ring camera can recognize these individuals when they stop by.

Helpful? Maybe. Creepy? When you consider the way Ring wants to secretly turn Search Party into a mass surveillance tool, most definitely.

We know this for a fact now, thanks to a leaked email written by Ring’s founder, Jamie Siminoff, himself. “I believe that the foundation we created with Search Party, first for finding dogs, will end up becoming one of the most important pieces of tech and innovation to truly unlock the impact of our mission,” he begins, pivoting immediately to a grander picture. “You can now see a future where we are able to zero out crime in neighborhoods. So many things to do to get there, but for the first time ever, we have the chance to fully complete what we started.”

The Household Items That Spy On You

I’m sure a lot of you saw the Ring Super Bowl commercial where they showed how Amazon’s Ring cameras surveil entire neighborhoods through a feature called Search Party. The feature allows users to upload an image of something and then all of the Ring cameras in its proximity comb through their own footage to see if that thing passed in front of them. The ad used the example of a user using Search Party to find a missing dog.

Critical observers were quick to point out that this could easily be used to spy on everyone, but it’s not just Ring that’s spying on you. Here are some common consumer items that you might not have realized are watching you right now.

Your TV. Modern TVs have something in the terms of service called Automatic Content Recognition (ACR), which basically means that your TV records everything you play on it, whether it’s Netflix, YouTube, video games, or even something you watched by connecting your laptop. And disconnecting your TV from the internet doesn’t stop the tracking. Just last month, major TV manufacturers were sued in Texas for screenshotting users’ TVs every 500 milliseconds.
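To make the mechanism concrete: ACR systems typically fingerprint what’s on screen and match it against a catalog of known content. Here is a toy sketch of that idea using a simple average-hash fingerprint; the frames, catalog entries, and thresholds are invented for illustration and bear no relation to any manufacturer’s actual system.

```python
# Toy illustration of content fingerprinting: hash a frame, then look
# for a close match in a catalog of known fingerprints.

def average_hash(frame):
    """Fingerprint a tiny grayscale frame (2-D list of 0-255 ints)
    as a bit string: 1 where a pixel is above the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical catalog mapping fingerprints to content titles.
catalog = {
    "1100": "Show A, episode 3",
    "0011": "Show B, episode 1",
}

def identify(frame, max_distance=1):
    """Return the catalog title whose fingerprint is close enough,
    or None if nothing matches."""
    fp = average_hash(frame)
    for known, title in catalog.items():
        if hamming(fp, known) <= max_distance:
            return title
    return None

# A 2x2 "frame" whose bright pixels sit in the top row hashes to "1100".
print(identify([[200, 210], [10, 20]]))  # -> Show A, episode 3
```

Real ACR operates on full video frames or audio snippets at scale, but the shape is the same: periodic capture, fingerprint, match, report.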

Your car. New cars monitor not only your location, how fast you drive, and how hard you brake, but also things like your race, weight, health, taste in music, sexual activity, and trade union membership. The HALT Drunk Driving Act mandates that by 2026, all new cars have to have a mechanism that automatically turns a car off if it deems someone unfit to drive.

Your router. Xfinity recently announced its WiFi Motion feature, which monitors the signal strength between the router and its connected devices, like printers, phones, or game consoles. If someone walks between the router and a device, the router can see the signal being disrupted, effectively turning it into a motion tracker. Several ISPs are rolling out similar features and marketing them as home security tools.
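The signal-disruption idea is simple enough to sketch: a body moving through the radio path perturbs the received signal strength, so a spike in the short-window variance of RSSI readings suggests motion. The window size, threshold, and sample values below are arbitrary illustrative choices, not anything an ISP has published.

```python
# Toy sketch of router-based motion sensing: flag motion when the
# variance of recent signal-strength (RSSI, in dBm) readings jumps.
from statistics import pvariance

def motion_detected(rssi_samples, window=5, threshold=4.0):
    """Return True if any run of `window` consecutive RSSI readings
    shows variance above `threshold`."""
    for i in range(len(rssi_samples) - window + 1):
        if pvariance(rssi_samples[i:i + window]) > threshold:
            return True
    return False

steady = [-52, -52, -53, -52, -52, -53, -52]     # empty room: tiny jitter
disturbed = [-52, -52, -60, -48, -62, -52, -52]  # someone walks through

print(motion_detected(steady))      # -> False
print(motion_detected(disturbed))   # -> True
```

Production systems use far richer channel-state information than bare RSSI, but the privacy implication is the same: the radio link itself doubles as a presence sensor.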

Smart watches. Smart watches can be used to track your personal health, like counting calories or monitoring your heart rate, but that information can just as easily be used to create a detailed map of your life. It reveals when you eat, when you sleep, when you’re nervous, when you’re lying, where you were, whether you were on drugs, and so on. Smart watch data has already been used as evidence in criminal cases.

AI assistants. Tech companies are in a race to give their AI chatbots the power to execute real-world tasks like sending emails, creating to-do lists, shopping, or planning vacations. The only catch is that if you want one to shop, you have to give it your credit card information. If you want it to clean up your desktop, you have to give it access to your hard drive. The AI assistant Clawdbot went viral after naive users installed it only to have their assistants taken over by hackers.

OpenAI failed to disclose Canadian trans shooter’s ChatGPT history in meeting with officials day after shooting

In a meeting with the British Columbia government, the day after a trans-identifying 18-year-old carried out a mass shooting in Tumbler Ridge, OpenAI did not disclose that it had been aware of concerning conversations the shooter had with its chatbot months prior.

The province said in a statement that OpenAI waited until the following day to ask its provincial contact to help connect the company with the Royal Canadian Mounted Police. OpenAI handed over evidence that the shooter was banned from using ChatGPT after its automated screening systems flagged his chats last June, according to a company statement.

A previous report found that some employees had wanted the company to alert police about posts involving gun violence, but were ignored.

The Globe and Mail reported that, according to a statement from Premier David Eby’s office, a government representative met with OpenAI employees on February 11 about its interest in opening a satellite office in Canada. The following day, OpenAI requested contact information from the RCMP.

“That request was sent to the director of policing and law-enforcement services, who connected OpenAI with the RCMP,” the statement said. “OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge.”

Precrime: Months Before Massacre, OpenAI Worried About Canada’s Trans Mass Killer

Months before a Canadian man in a dress went on a Feb 10 rampage, killing his mother and half-brother at home before slaughtering five students and an education assistant at a secondary school where he was formerly a student, employees at OpenAI were deeply troubled by his interactions with the firm’s ChatGPT AI chatbot.   

As first reported by the Wall Street Journal, Jesse Van Rootselaar’s ChatGPT activity was flagged by the company’s automated review system. When employees took a look at what he’d been up to over a several-day period in June 2025, they were alarmed. About a dozen of them debated what they should do.

Some were convinced Van Rootselaar’s descriptions of gun-violence scenarios signaled a substantial risk of real-world bloodshed, and implored their supervisors to notify police, according to the Journal’s unnamed sources. They opted against doing so, and a spokeswoman now says they’d concluded Van Rootselaar’s posts didn’t cross the threshold of posing a credible and imminent risk of serious harm. Instead, the company decided only to ban his account. 

About seven months after his disturbing series of interactions with ChatGPT, police say he killed 8 people and injured 25 more before killing himself in the school he’d attended earlier. Van Rootselaar’s social media and YouTube accounts contained transgender symbolism as well as the online name “JessJessUwU” (a meme phrase that people may recognize from the bullet casings tied to the gay suspect charged in the assassination of Charlie Kirk). 

How Developers Are Making AI Your Kid’s Third Parent In The Classroom

Under Roman law a father held a legal power called patria potestas, or “total ownership,” of his children. He could sell them, deny them property, or abandon a newborn on a hillside. The child was not a person but property under the law. What a surprise then that the so-called “paternalistic” Apostle Paul upended five centuries of that system in a single verse when he wrote “Fathers do not exasperate your children; instead bring them up in the training and instruction of the Lord” (Ephesians 6:4). Roman law already demanded obedience to the father under pater familias. So Paul’s revolutionary move was not to challenge obedience, but to tell the man holding absolute power that he had a duty to the best interests of the child rather than himself.

Paul’s words to the Ephesians shaped Western family law for two millennia, including modern American case law (see Pierce v. Society of Sisters, 1925; Wisconsin v. Yoder, 1972). But today a different authority has moved into the space between parent and child; not a patriarch but an “aithority” — an algorithm built by the largest technology corporations on earth and dropped into American classrooms through a partnership with the teachers unions. Nobody sent a permission slip home.

The scale of “the aithority” in schools is already exasperating. In late 2025, Google announced its Gemini AI education tools had reached more than 10 million students across more than 1,000 U.S. institutions. The company rolled out more than 150 new AI features in a single year, trained more than 1 million educators for free, and embedded AI tutoring modules directly into Google Classroom. Separately, Google invested $1 billion in college-level AI integration. In June 2025 the American Federation of Teachers (AFT), the second-largest teachers union in the country, announced a partnership with OpenAI, Microsoft, and Anthropic to accelerate AI adoption in classrooms nationwide. That deal was negotiated between union leadership and three of the most powerful AI companies on earth. Parents were not at the table.

Farmer Hailed as Hero for Rejecting Huge Payment to Turn His Land Into a Giant Data Center

The immense hype surrounding AI has caused enormous data centers to crop up across the country, triggering significant opposition. It’s not just the loss of land: enormous power needs are pushing the grid into meltdown and driving up local electricity prices, catching the attention of politicians and their irate constituents.

One 86-year-old farmer in Cumberland County, Pennsylvania, has heard enough. As local Fox affiliate WPMT reports, Mervin Raudabaugh, who has farmed the surrounding land for more than 60 years, turned down more than $15 million from data center developers in a package deal that involved three neighboring property owners as well.

The farmer was offered $60,000 per acre to build a data center on his property. But giving up his family legacy wasn’t in the cards for him.

“I was not interested in destroying my farms,” he told WPMT. “That was the bottom line. It really wasn’t so much the economic end of it. I just didn’t want to see these two farms destroyed.”

Instead, he sold the development rights in December for just under $2 million to a conservation trust, taking a significant loss but guaranteeing that it would stay farmland in perpetuity.

Users on social media called him a “legend,” and argued he had “more integrity than the whole government.”

“Now that is a real hero in these gutless times!” another user tweeted.

“$15M is huge, but clean water, quiet land, and legacy don’t have a price tag,” another user argued.

The sheer amount of land being earmarked for enormous, energy- and water-sucking data centers is remarkable. A data center in Mount Pleasant, Wisconsin, is set to take up 600 acres, which could cost local residents their land, as ABC News reported this week. Another octogenarian farmer, 83-year-old Tom Uttech, who has lived on his 52-acre Wisconsin property for almost 40 years, told the broadcaster that he “couldn’t believe” that a local utility company was looking to build “power lines that are 300 or something feet tall, taller than apparently the Statue of Liberty,” through his land to power the data center.

Per ABC, there are more than 3,000 data centers in the US, with another 1,200 currently under construction.
