X CRACKS DOWN on AI-Generated War Propaganda: NO MORE Cashing In on Fake Footage Without Labels

In an effort to protect truthful information during global conflicts, X has rolled out strict new rules targeting creators who peddle AI-generated videos of war without clear disclosures. This comes as pro-Iran propagandists flooded the platform with fabricated clips designed to sow chaos.

The policy shift, effective immediately, focuses on X’s Creator Revenue Sharing program. Creators posting AI-made content showing armed conflicts must include a clear label, or face penalties that hit where it hurts: their wallets.

According to details from X’s head of product, Nikita Bier, the platform is clamping down hard. “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Bier stated.

Bier elaborated that X will use its Community Notes system—crowdsourced fact-checking that empowers users over elite moderators—along with post metadata to detect undeclared AI content. The rules don’t ban AI videos outright; they just demand transparency via X’s built-in “Made with AI” label option.
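X has not published how its metadata checks work, so the following is purely an illustrative sketch of the idea: flag posts whose media metadata declares AI generation but which lack the platform's "Made with AI" label. The field names, generator list, and post structure are all hypothetical.

```python
# Hypothetical sketch only: X's actual detection pipeline is not public.
# The idea: media metadata (e.g. C2PA-style provenance fields) can declare
# that content was AI-generated; a post carrying such metadata but no
# "Made with AI" label would be a candidate for enforcement.

def needs_ai_label(post: dict) -> bool:
    """Return True if the post's media declares AI generation
    but the creator did not apply a 'Made with AI' label."""
    media_meta = post.get("media_metadata", {})
    ai_generated = media_meta.get("generator", "").lower() in {
        "dall-e", "sora", "midjourney", "stable-diffusion"
    } or media_meta.get("c2pa_ai_flag", False)
    labeled = post.get("labels", {}).get("made_with_ai", False)
    return ai_generated and not labeled

post = {"media_metadata": {"generator": "Sora"}, "labels": {}}
print(needs_ai_label(post))  # True: AI-generated media, no label
```

In practice, detection would also have to handle stripped metadata, which is why the policy pairs it with crowdsourced review via Community Notes.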

Violators get a 90-day suspension from earning ad revenue on their posts. A second offense leads to a permanent ban from the program. This targets those exploiting wars for profit, without stifling creative expression.

The update follows a surge in deceptive content amid the Iran-Israel clash. Pro-Iran accounts have pushed AI fakes, like one claiming Iranian missiles sank the USS Abraham Lincoln. “Iran’s IRGC claims to have struck USS Abraham Lincoln with ballistic missiles. LIE,” CENTCOM posted. “The Lincoln was not hit. The missiles launched didn’t even come close.”

Keep reading

Left-wing ideology is being encoded into AI systems to censor “wrongthink”

In 2021, a group of researchers dramatically departed OpenAI, the company behind ChatGPT. Led by Dario Amodei, OpenAI’s former vice president of research, they cited deep concerns about “AI safety.” The company was moving too fast, they warned, prioritising commercial interests over humanity’s future. The risks were said to be existential. These Effective Altruists were going to do things the right way.

Their solution? Start a new company called Anthropic, premised on building AI “the right way” with “safety” (that word will become a recurring theme), and “proper guardrails.” They initially raised hundreds of millions (today, that number is in the tens of billions) from investors who bought the pitch: we’re the good guys preventing runaway artificial general intelligence (“AGI”).

Noble, right? Except these supposed guardrails against AGI have become pretty much impossible to quantify. What we do have is an incredibly sophisticated content moderation system that filters inquiries and commands through a Silicon Valley thought bubble. It doesn’t seem like they’re trying to prevent AGI from destroying humanity, but instead, to prevent you from challenging the core tenets of their political philosophy.

Go ahead and try to generate content questioning climate ideology, the trans agenda, voter ID laws or election integrity, and watch the “safety” guardrails kick in.

This isn’t about preventing Skynet. It’s about making sure AI parrots the right opinions and associates with the right kind of people.

Now that Anthropic is an AI giant in its own right, it faces the same critiques from true believers in the space. Amodei has put his principles on hold to allow foreign investment from Gulf states with poor human rights records. However, the company remains guided by a secular progressive “philosopher” whose values are entirely detached from America’s founding ideas.

Keep reading

Should People Be Free To Deal With the Department of War?

In a February 27 post titled “You Should Have Moral Qualms about Anthropic’s Claims,” Hoover Institution senior fellow and foreign policy scholar Amy Zegart challenged the ethics of the AI company Anthropic. What I found refreshing, by contrast, is that a defense contractor’s CEO believed strongly enough in his ethics to forgo a lucrative contract. According to Zegart, I should have moral qualms about that. I don’t, and I’ll say why.

Anthropic had told the Department of War that it did not want its products used for either autonomous weapons or mass surveillance of Americans. According to Zegart, the Pentagon stated that it did not contemplate such uses. But that wasn’t enough for Dario Amodei, the CEO of Anthropic, who stated that he could not “in good conscience” accept the War Department’s assurances. Here’s Brendan Bordelon in a February 26 news item in Politico:

[Secretary of War] Hegseth met with Anthropic CEO Dario Amodei on Tuesday to deliver a warning: give the military unfettered access to its Claude AI model by Friday evening or else have the government label it a “risk” to the supply chain. The designation, typically reserved for foreign firms with ties to U.S. adversaries, could ban companies that work with the government from partnering with Anthropic.

Hegseth threatened Anthropic with designating it as a risk to the supply chain. With that label, Anthropic could be forbidden, as noted above, from working with companies that work with the government. Hegseth also, though, threatened to invoke the Defense Production Act to compel Anthropic to work with the Defense Department. A risk to the supply chain and, at the same time, a firm that Hegseth wants to use? Hmmm. Bordelon quotes Dean Ball, whom he identifies as a former AI advisor in the Trump administration, noting the obvious contradiction. Said Ball, “You’re telling everyone else who supplies to the DOD you cannot use Anthropic’s models, while also saying that the DOD must use Anthropic’s models.”

Zegart cites the Politico article but doesn’t mention this contradiction. Instead, she goes after Anthropic and CEO Amodei. She writes:

There is a serious ethical question about whether one company, elected by nobody, with its own normative agenda as well as substantial global investors and customers, should be dictating the conditions of the most essential government role: protecting the lives of Americans.

But she misstates the issue. Anthropic isn’t trying to dictate the conditions of this essential government role. Anthropic is simply stating what its own limits are. If the Pentagon can find another supplier, it is free to do so and, indeed, has already done so. OpenAI has stepped up to take Anthropic’s place.

Moreover, why does Zegart think it’s important that Anthropic is elected by nobody? Does Zegart really think that companies that contemplate working with the Department of War should be elected by somebody?

Keep reading

President Trump Directs Every Federal Agency to Cease Use of Anthropic After AI Company Refuses to Comply with Pentagon’s Demands

President Trump on Friday ordered every federal agency to cease use of Anthropic AI after the company refused to comply with the Pentagon’s demands.

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military,” Trump said.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” Trump said.

“Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” Trump added.

“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!” Trump added.

Keep reading

Epstein ALIVE? Conspiracy Theories Surge Over Fortnite Logs And ‘Fake’ Prison Pics

The Jeffrey Epstein scandal just keeps expanding and evolving. Recent document dumps have reignited wild theories that the convicted sex trafficker didn’t kill himself—or perhaps didn’t die at all. 

With rampant speculation that both Epstein and Ghislaine Maxwell were intelligence assets, controlled by a “supra government” above elected officials, many are arguing nothing in their case is beyond the realms of possibility.

As we previously highlighted, Epstein had an extensive gaming history, and his “littlestjeff1” Fortnite username sparked claims of post-death logins from Israel, amplifying the alive-and-kicking narrative. 

As we reported, Epic Games debunked the original account as a hoax rename, but the gaming angle persists as a gateway for deeper conspiracies.

Ben Swann highlighted this in a recent X video, diving into “a shocking theory based on Fortnite activity linked to him.” 

Keep reading

Sam Altman Fumes That It Takes Longer to Train a Human Than an AI, Plus They Eat All That Wasteful Food

AI leaders insist they’ve got humanity’s best interests in mind. If we’re to take them at their word, then we must say: they have a really unfortunate habit of sounding like they have nothing but contempt for the human race.

The latest case in point: OpenAI CEO Sam Altman’s tone-deaf comments at an event hosted by The Indian Express — made fresh off his skin-crawlingly awkward refusal to join hands with Anthropic’s Dario Amodei on stage with other industry titans — in which he attempted to downplay critiques of AI’s environmental impact. 

For starters, he called it “unfair” to compare the energy costs of training an AI model “to how much it costs a human to do one inference query.” That’s because, as Altman explains, “it also takes a lot of energy to train a human.”

“It takes like 20 years of life and all of the food you eat during that time before you get smart,” Altman continued. “And not only that, it took the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever, to produce you.”

Measured that way, “probably AI has already caught up on an energy efficiency basis” to humans, Altman said.

Altman also fumed against claims about AI’s water consumption.

“Water is totally fake,” he began, almost taunting quote-miners. “It used to be true, we used to do evaporative cooling in data centers.” 

“But now that we don’t do that,” Altman said, you still see claims like “‘don’t use ChatGPT, it’s 17 gallons of water for each query,’ or whatever.” 

“This is completely untrue and totally insane,” he asserted. “No connection to reality.”

No one can deny that humans are costly to bring up in our industrialized age. We should be doing everything realistically possible to bring down our CO2 emissions and stop eating so much meat — but we aren’t, for a number of dispiriting systemic reasons we won’t get into today.

Regardless, at least those costs are going towards keeping human civilization ticking. All the water in agriculture will keep someone fed, and the fossil fuels we burn will keep someone warm. 

Keep reading

Leaked founder email says the quiet part out loud — Ring was built to spy on your neighborhood

Ring unveiled its controversial AI-powered Search Party feature in a multimillion-dollar ad that aired during the Super Bowl, sparking blowback from privacy advocates from left to right and dead center. Although the feature was positioned as a helpful way to track down lost pets and bring them home safely, a leaked email from Ring’s founder reveals a much more sinister purpose: Ring intended to spy on people all along.

Ring’s founder revealed the truth about Search Party

From the moment we heard about Search Party, one thing was clear — tracking down lost pets was only the tip of the iceberg. The obvious next step was human surveillance. After all, Ring can already scan and identify designated humans with a feature called Familiar Faces. Once enabled, users can add the faces of friends and family to their Ring app so that their Ring camera can recognize these individuals when they stop by.

Helpful? Maybe. Creepy? When you consider the way Ring wants to secretly turn Search Party into a mass surveillance tool, most definitely.

We know this for a fact now, thanks to a leaked email written by Ring’s founder, Jamie Siminoff, himself. “I believe that the foundation we created with Search Party, first for finding dogs, will end up becoming one of the most important pieces of tech and innovation to truly unlock the impact of our mission,” he begins, pivoting immediately to a grander picture. “You can now see a future where we are able to zero out crime in neighborhoods. So many things to do to get there, but for the first time ever, we have the chance to fully complete what we started.”

Keep reading

The Household Items That Spy On You

I’m sure a lot of you saw the Ring Super Bowl commercial where they showed how Amazon’s Ring cameras surveil entire neighborhoods through a feature called Search Party. The feature allows users to upload an image of something and then all of the Ring cameras in its proximity comb through their own footage to see if that thing passed in front of them. The ad used the example of a user using Search Party to find a missing dog.
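Ring has not published how Search Party's matching works, but the general technique it describes — comparing an uploaded image against footage from many cameras — is typically done by reducing images to feature vectors and measuring similarity. The tiny vectors and camera names below are invented stand-ins for real learned embeddings.

```python
import math

# Illustrative sketch of the matching idea behind a Search Party-style
# feature: compare an uploaded image's feature vector against vectors
# extracted from each camera's footage. Real systems use learned
# embeddings from a neural network; fixed toy vectors stand in here.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search_cameras(query_vec, camera_frames, threshold=0.9):
    """Return IDs of cameras whose footage contains a close match."""
    hits = []
    for cam_id, frame_vecs in camera_frames.items():
        if any(cosine_similarity(query_vec, v) >= threshold for v in frame_vecs):
            hits.append(cam_id)
    return hits

query = [0.9, 0.1, 0.4]                 # embedding of the uploaded photo
footage = {
    "cam_porch": [[0.88, 0.12, 0.41]],  # near-identical scene: a match
    "cam_alley": [[0.1, 0.9, 0.2]],     # unrelated scene
}
print(search_cameras(query, footage))   # ['cam_porch']
```

Nothing in this sketch cares whether the query is a dog or a person, which is exactly the privacy concern: the same matching machinery works on any uploaded image.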

Critical observers were quick to point out that this could easily be used to spy on everyone, but it’s not just Ring that’s spying on you. Here are some common consumer items that you might not have realized are watching you right now.

Your TV. Modern TVs have something in the terms of service called Automatic Content Recognition, which basically means that your TV records everything you play on it, whether it’s Netflix, YouTube, video games or even something you watched by connecting your laptop. And disconnecting your TV from the internet doesn’t stop this. Just last month, major TV manufacturers were sued in Texas for screenshotting users’ TVs every 500 milliseconds.
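ACR systems identify what's on screen by reducing captured frames to compact fingerprints and matching them against a reference database. The toy "average hash" below is a simplified stand-in for the far more robust audio/video fingerprints real ACR vendors use; the show title and pixel values are invented.

```python
# Illustrative sketch of ACR-style identification: fingerprint each
# captured frame and match it against known content. Real systems use
# robust perceptual fingerprints; this average-hash toy shows the idea.

def average_hash(pixels):
    """Fingerprint a grayscale frame (list of 0-255 values) as a bit string."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def identify(frame, reference_db, max_distance=1):
    """Match a frame's hash against known content, tolerating small noise."""
    h = average_hash(frame)
    for title, ref_hash in reference_db.items():
        distance = sum(a != b for a, b in zip(h, ref_hash))
        if distance <= max_distance:
            return title
    return None

db = {"some_show_s01e01": average_hash([200, 10, 180, 30, 220, 15, 190, 25])}
captured = [198, 12, 179, 33, 221, 14, 188, 27]  # same scene, slight noise
print(identify(captured, db))  # 'some_show_s01e01'
```

Because the fingerprint is computed from whatever pixels hit the screen, the source of the video — streaming app, game console, or a connected laptop — makes no difference.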

Your car. New cars monitor everything from your location to how fast you drive and how hard you brake, as well as things like your race, weight, health, taste in music, sexual activity and trade union membership. The HALT Drunk Driving Act mandates that by 2026, all new cars have to have a mechanism that automatically turns a car off if it deems someone unfit to drive.

Your router. Xfinity recently announced its WiFi Motion feature, which monitors the signal strength between the router and its connected devices like printers, phones or game consoles. If someone walks between the router and the device, the router will be able to see the signal being disrupted, effectively turning it into a motion tracker. Several ISPs are rolling out similar features and marketing them as home security tools.
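The signal-disruption idea described in that paragraph can be sketched simply: a body crossing the path between router and device perturbs received signal strength, so a spike in short-window RSSI variance can be read as "motion." The window size, threshold, and sample values below are illustrative assumptions, not Xfinity's actual parameters.

```python
# Hedged sketch of router-based motion sensing: a person walking
# between the router and a device perturbs signal strength (RSSI, in
# dBm), so a jump in short-window variance suggests motion. The
# threshold and window here are invented for illustration.

def detect_motion(rssi_samples, window=5, var_threshold=4.0):
    """Flag each sliding window whose RSSI variance exceeds the threshold."""
    flags = []
    for i in range(len(rssi_samples) - window + 1):
        chunk = rssi_samples[i:i + window]
        mean = sum(chunk) / window
        var = sum((x - mean) ** 2 for x in chunk) / window
        flags.append(var > var_threshold)
    return flags

quiet = [-50, -51, -50, -50, -51]       # stable link: no motion
disturbed = [-50, -57, -44, -58, -49]   # someone walks through the path

print(any(detect_motion(quiet)))        # False
print(any(detect_motion(disturbed)))    # True
```

Note that this works with no camera and no microphone: the "sensor" is the radio environment itself, which is why privacy advocates treat it differently from opt-in devices.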

Smart watches. Smart watches can be used to track your personal health like counting calories or monitoring your heart rate but that information can just as easily be used to create a detailed map of your life. It reveals when you eat, when you sleep, when you’re nervous, when you’re lying, where you were, if you were on drugs, etc. Smart watch data has already been used as evidence in criminal cases.

AI assistants. Tech companies are in a race to give their AI chatbots the power to execute real-world tasks like sending emails, creating to-do lists, shopping, or planning vacations. The only catch is if you want it to shop, you have to give it your credit card information. If you want it to clean up your desktop, you have to give it access to your hard drive. AI assistant Clawdbot went viral after naive users installed it only to have their assistants taken over by hackers.

Keep reading

OpenAI failed to disclose Canadian trans shooter’s ChatGPT history in meeting with officials day after shooting

In a meeting with the British Columbia government the day after a trans-identifying 18-year-old carried out a mass shooting in Tumbler Ridge, OpenAI did not disclose that it had been aware of concerning conversations the shooter had with its chatbot months prior.

The province said in a statement that OpenAI waited until the following day to ask its provincial contact to help connect the company with the Royal Canadian Mounted Police. OpenAI handed over evidence that the shooter was banned from using ChatGPT after its automated screening systems flagged his chats last June, according to a company statement.

A previous report found that some employees had wanted the company to alert police about posts involving gun violence, but were ignored.

The Globe and Mail reported that, according to a statement from Premier David Eby’s office, a government representative met with OpenAI employees on February 11 about its interest in opening a satellite office in Canada. The following day, OpenAI requested contact information from the RCMP.

“That request was sent to the director of policing and law-enforcement services, who connected OpenAI with the RCMP,” the statement said. “OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge.”

Keep reading

Precrime: Months Before Massacre, OpenAI Worried About Canada’s Trans Mass Killer

Months before a Canadian man in a dress went on a Feb 10 rampage, killing his mother and half-brother at home before slaughtering five students and an education assistant at a secondary school where he was formerly a student, employees at OpenAI were deeply troubled by his interactions with the firm’s ChatGPT AI chatbot.   

As first reported by the Wall Street Journal, Jesse Van Rootselaar’s ChatGPT activity was flagged by the company’s automated review system. When employees took a look at what he’d been up to over a several-day period in June 2025, they were alarmed. About a dozen of them debated what they should do.

Some were convinced Van Rootselaar’s descriptions of gun-violence scenarios signaled a substantial risk of real-world bloodshed, and implored their supervisors to notify police, according to the Journal’s unnamed sources. They opted against doing so, and a spokeswoman now says they’d concluded Van Rootselaar’s posts didn’t cross the threshold of posing a credible and imminent risk of serious harm. Instead, the company decided only to ban his account. 

About seven months after his disturbing series of interactions with ChatGPT, police say he killed 8 people and injured 25 more before killing himself in the school he’d attended earlier. Van Rootselaar’s social media and YouTube accounts contained transgender symbolism as well as the online name “JessJessUwU” (a meme phrase that people may recognize from the bullet casings tied to the gay suspect charged in the assassination of Charlie Kirk). 

Keep reading