‘CODE RED’ Author Tells Fox News: Google Gemini AI Claims Republicans Like Marsha Blackburn, Tom Cotton Engage in Hate Speech

Google’s Gemini AI chatbot claims that only Republican senators violate its hate speech policy, with not a single Democrat flagged by the woke tech giant’s system, Breitbart News social media director Wynton Hall demonstrated to Fox News in a revelation published today. The bias built into AI by leftist Silicon Valley tech titans is a central subject of Hall’s new book, CODE RED.

Gemini flagged a group of Republican senators — but no Democrats — when asked to name senators who have made statements that violate Google’s hate speech policies, Hall demonstrated to Fox News with a video of Gemini AI in action.

Hall, whose new book, Code Red: The Left, the Right, China, and the Race to Control AI, publishes on Tuesday, added that this is just one example of what is a deeply ingrained bias against conservatives in AI tools.

“AI’s Silicon Valley architects lean left politically, and their lopsided political donations to Democrats underscore their ideological aims,” the author told the outlet.

Fox News reported:

Hall used the “deep research” function on Google’s Gemini Pro. Fox News Digital reviewed a screen recording of Hall’s prompt and findings. Google did not immediately respond to Fox News Digital’s request for comment.

One of the Republicans flagged by Gemini in Hall’s research, Sen. Marsha Blackburn, of Tennessee, was listed for characterizing “transgender identity as a harmful cultural ‘influence’ and has used ‘woke’ as a derogatory slur against protected groups.” Another, Arkansas’ Sen. Tom Cotton, was cited for cosponsoring legislation “to exclude transgender students from sports.”

Hall explains in CODE RED that AI tools touting themselves as neutral are actually shaped by the political bias of those who create them. The Breitbart News social media director begins his book with a stark example, pointing to an incident in 2024 in which several viral videos seemingly exposed a clear double standard in American homes.

Keep reading

Companies Are Starting To Enforce AI Use. Is That A Good Or Bad Thing?

Years ago, I was working on the editorial side for what was then a hot new media company, and found myself spending more and more time with Johan, the lead programmer, and his team, asking them a lot of annoying questions as it was all so new – certainly to me. I was standing over Johan’s left shoulder, mesmerized by whatever new video game he was obsessing over that week…when suddenly, out of nowhere, a spreadsheet and a pie chart appeared on his screen.

“Whatcha got there, Johan?” asked Jim, Johan’s boss, peering over a sheaf of print-outs as he sharked past the cubicle.

“Hey, just looking at some numbers,” Johan replied. Johan had hit the “game key” in the nick of time – in those days, every video game had a game key – ALT-G if memory serves – calling up a slight variation of the same spreadsheet and pie chart.

This would never happen today. First, you’re probably not working in a cubicle, and if you are, it’s not the game key you’d hit to give your boss the impression that you’re actually doing productive work…it would be the “AI key.”

“Tech Firms Aren’t Just Encouraging Their Workers to Use AI. They’re Enforcing It.”

This article appeared in the February 24 edition of the Wall Street Journal. It carries the subtitle: “From startups to giants, including Meta and Google, companies are factoring AI use into performance reviews and trying to track productivity gains.”

Across industries, companies are now enforcing AI use through performance reviews, dashboards that track adoption, and explicit mandates that tie it to compensation and promotion. What began in Silicon Valley has rapidly spread to consulting firms, banks, manufacturers, hospitals, and even government agencies.

As you’d expect, Meta, Google, Amazon, and Microsoft were the first to move from encouragement to enforcement. Employees at these firms now see AI usage metrics appear in quarterly reviews. Non-adopters have reported stalled promotions or explicit warnings that “AI fluency” is a core competency (The Wall Street Journal, Feb 2026, reporting on internal policies).

The trend has jumped sectors. PwC requires every consultant to complete an “AI + Human Skillset” curriculum and incorporates usage into evaluations (Business Insider, Feb 5, 2026). Colgate-Palmolive’s “AI evangelist” tracks adoption across global teams. Major banks have begun tying bonuses to the number of AI-assisted analyses completed. Even some hospitals now require doctors and nurses to use AI-assisted diagnostic tools for certain procedures.

Keep reading

The Dark Side of AI: Innocent Grandmother Wrongfully Jailed for 6 Months After Facial Recognition Error

A Tennessee grandmother spent nearly six months behind bars on charges out of North Dakota, a state she had never even set foot in, after being wrongfully identified by AI facial recognition technology in a bank fraud investigation.

The Grand Forks Herald reports that Angela Lipps, a 50-year-old mother of three and grandmother of five from Tennessee, found herself trapped in a nightmare that began last July when U.S. Marshals arrested her at gunpoint while she was babysitting four young children. Fargo police had used facial recognition software to identify her as the primary suspect in an organized bank fraud case, despite the fact that she had never set foot in North Dakota.

The case began in April and May 2025 when Fargo Police Department detectives investigated several bank fraud incidents. Surveillance footage captured a woman using a fraudulent U.S. Army military identification card to withdraw tens of thousands of dollars from local banks. To identify the suspect, investigators employed facial recognition software, which incorrectly matched the woman in the videos to Lipps.

According to court documents obtained through an open records request, the detective assigned to the case reviewed Lipps’ social media accounts and Tennessee driver’s license photo after receiving the facial recognition match. In the charging document, the detective stated that Lipps appeared to be the suspect based on facial features, body type, hairstyle, and hair color. Notably, no one from the Fargo Police Department contacted Lipps to question her before filing charges.

Lipps was arrested on July 14 and booked into a county jail in Tennessee as a fugitive from justice. She faced four counts of unauthorized use of personal identifying information and four counts of theft in North Dakota. Held without bail due to her fugitive status, Lipps spent 108 days in the Tennessee jail before North Dakota officers transported her to Fargo on October 30.

“It was so scary, I can still see it in my head, over and over again,” Lipps said during an interview about her ordeal.

Keep reading

AI Can Now Unmask Anonymous Internet Users, New Study Finds

It looks like AI can now unmask anonymous accounts on the internet at scale. That’s according to a new study by Simon Lermen (MATS), Daniel Paleka (ETH Zurich), Joshua Swanson (ETH Zurich), Michael Aerni (ETH Zurich), Nicholas Carlini (Anthropic), and Florian Tramèr (ETH Zurich), published on arXiv.

In the paper, “Large-Scale Online Deanonymization with LLMs,” the researchers show that modern large language models (LLMs) can re-identify people behind pseudonymous online accounts at a scale and accuracy that far surpass previous techniques.

The core contribution is an automated deanonymization pipeline powered by LLMs, according to the new study. Instead of relying on structured datasets or hand-engineered features—like earlier attacks on the Netflix Prize dataset—the system works directly on raw, unstructured text.

Given posts, comments, or interview transcripts written under a pseudonym, the pipeline extracts identity-relevant signals, searches for likely matches using semantic embeddings, and then uses higher-level reasoning to verify the most promising candidates while filtering out false positives. The result is a scalable attack that mirrors—and in some cases exceeds—the effectiveness of a dedicated human investigator.
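The paper’s actual pipeline is built on LLMs and is not reproduced here; purely as an illustration of its retrieve-then-verify shape, the toy sketch below uses a hashed bag-of-words vector as a stand-in for the semantic embeddings and a crude token-overlap check as a stand-in for the LLM verification stage. All names and texts are invented.

```python
import math
from collections import Counter

def embed(text, dim=64):
    """Toy stand-in for a semantic embedding: hashed bag-of-words, L2-normalized."""
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(pseudo_text, candidates, k=3):
    """Stage 2: rank candidate public profiles by embedding similarity."""
    q = embed(pseudo_text)
    scored = sorted(candidates.items(),
                    key=lambda kv: cosine(q, embed(kv[1])),
                    reverse=True)
    return scored[:k]

def verify(pseudo_text, profile_text, threshold=2):
    """Stage 3 stand-in: accept a candidate only if enough tokens overlap.
    (The paper uses higher-level LLM reasoning here; this is a crude proxy.)"""
    a = Counter(pseudo_text.lower().split())
    b = Counter(profile_text.lower().split())
    return sum((a & b).values()) >= threshold

# Invented example data: one pseudonymous post history, two public profiles.
pseudonymous_posts = "I maintain a rust compiler plugin and climb in yosemite"
public_profiles = {
    "alice": "software engineer, rust compiler contributor, climbing in yosemite",
    "bob": "pastry chef who blogs about sourdough",
}
top = retrieve(pseudonymous_posts, public_profiles, k=1)
matches = [name for name, text in top if verify(pseudonymous_posts, text)]
print(matches)
```

The retrieval stage narrows millions of candidates cheaply; the verification stage then spends more effort per candidate to suppress false positives, which is what lets the real system trade recall against precision.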

To evaluate their approach, the researchers constructed three datasets with known ground truth. The first links pseudonymous Hacker News users to real-world LinkedIn profiles, relying on cross-platform clues embedded in public text. The second matches users across movie discussion communities on Reddit. The third takes a single Reddit user’s history, splits it into two time-separated profiles, and tests whether the system can reconnect them.

Across all three settings, LLM-based methods dramatically outperformed classical baselines, which often achieved near-zero recall.

The headline numbers are striking. In some experiments, the system achieved up to 68% recall at 90% precision—meaning it correctly identified a substantial portion of targets while keeping false accusations low. Even when matching temporally split Reddit accounts separated by a year, performance remained strong. In contrast, traditional non-LLM approaches struggled to produce meaningful matches. The findings suggest that advances in reasoning and representation learning have transformed deanonymization from a niche, data-hungry attack into a broadly applicable capability.
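To make the headline numbers concrete: precision is the share of the system’s identifications that are correct, while recall is the share of truly linkable accounts it finds. The figures below are hypothetical, chosen only to reproduce the reported “68% recall at 90% precision” regime.

```python
def precision_recall(predicted, actual):
    """predicted: set of (account, identity) guesses; actual: true links."""
    true_positives = len(predicted & actual)
    return true_positives / len(predicted), true_positives / len(actual)

# Hypothetical: 100 truly linkable accounts; the system outputs 75 guesses,
# 68 of them correct.
actual = {(f"user{i}", f"person{i}") for i in range(100)}
predicted = ({(f"user{i}", f"person{i}") for i in range(68)}
             | {(f"user{i}", "wrong_person") for i in range(7)})
p, r = precision_recall(predicted, actual)
print(round(p, 2), round(r, 2))  # roughly 0.91 precision, 0.68 recall
```

In other words, at that operating point the attacker is right about nine times out of ten when it names someone, while still finding about two-thirds of its targets.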

Keep reading

Palantir CEO Says AI Will Shift Power Away From Democratic Voters and Toward Working-Class Men

Palantir CEO Alex Karp has said that artificial intelligence (AI) could shift economic influence away from highly educated voters who tend to support Democrats and toward vocationally trained, working-class men.

In an interview with CNBC, Karp discussed the broader societal impact of artificial intelligence and how it is expected to transform employment.

“This technology disrupts humanities-trained, largely Democratic voters, and makes their economic power less, and increases the economic power of vocationally trained, working-class, often male voters,” Karp said.

“So these disruptions are gonna disrupt every aspect of our society,” he said.

“To make this work, we have to come to an agreement of what it is we’re going to do with the technology; how are we gonna explain to people who are likely gonna have less good, and less interesting jobs.”

Keep reading

Ben Affleck Once Criticized AI, Now Netflix Is Buying His AI Startup For $600 Million

Ben Affleck—who has previously warned about the risks artificial intelligence poses to Hollywood—has sold his own AI filmmaking startup to Netflix in a deal that could reach $600 million, according to Bloomberg.

The cash portion of the acquisition is smaller, with additional payments tied to performance targets, but it still ranks among the largest AI-focused deals by a major studio.

The startup, InterPositive, developed software designed to help directors edit footage after filming, such as removing stray objects or changing elements in the background. The tools are intended to work with existing film rather than generate entirely new content. Director David Fincher has already used the technology on an upcoming movie starring Brad Pitt.

Netflix’s purchase highlights how studios are increasingly turning to artificial intelligence to streamline production and reduce costs. Rivals such as Amazon and The Walt Disney Company are also exploring AI tools for film and television development.

Bloomberg writes that Affleck built InterPositive with backing from RedBird Capital Partners and initially kept the project quiet before seeking investors in 2025. He has argued the technology should function as a controlled filmmaking aid: the system trains only on footage from a specific film and doesn’t scrape outside movies or generate new works independently.

Keep reading

AI Won’t Fix America’s Looming Debt Crisis

Last month, Congress sparred with the president over a partial budget, but with few real cuts, America’s slow march toward an epic debt crisis went on undeterred. With over $38 trillion in debt and interest payments exceeding defense or Medicare spending, one would expect lawmakers to confront reality and do the difficult work needed to restore fiscal sanity. But why would they? Cutting entitlements and increasing middle-class taxes rarely make for winning campaign slogans.

It’s no surprise, then, that some prefer to pin their hopes on AI as America’s fiscal savior. Vanguard’s chief economist Joe Davis argued there’s as high as a 50 percent chance AI will prevent a debt-driven economic malaise. Elon Musk voiced a similar conclusion late last year, claiming AI and robotics are “the only thing that’s going to solve the US debt crisis.”

The argument goes like this: an AI boom drives explosive economic growth and tax revenue, while, at the same time, productivity gains impressively offset any upward pressure on interest rates. The deficit becomes a surplus and the overall debt shrinks, possibly disappearing entirely.

If that sounds less like a policy plan and more like a retirement strategy built around winning the lottery, you’re not wrong. The entire scenario hinges on a massive if: that AI generates extraordinary revenue and does it quickly enough to outrun rising interest costs.
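The race between growth and interest costs can be put in rough numbers. The sketch below uses entirely illustrative inputs (the only figures taken from the article are the roughly $38 trillion debt and a ballpark GDP); it simply tracks debt-to-GDP when debt compounds at the interest rate plus a primary deficit, under a modest-growth baseline versus an assumed AI boom.

```python
def debt_path(debt, gdp, rate, growth, primary_deficit_share, years):
    """Debt-to-GDP trajectory: debt grows with interest plus a primary
    deficit (a fixed share of GDP); GDP grows at `growth`. Illustrative only."""
    path = []
    for _ in range(years):
        debt = debt * (1 + rate) + primary_deficit_share * gdp
        gdp *= 1 + growth
        path.append(debt / gdp)
    return path

# Illustrative inputs: $38T debt, ~$29T GDP, 4% interest, 3% primary deficit.
baseline = debt_path(38, 29, 0.04, 0.02, 0.03, 10)  # 2% trend growth
ai_boom = debt_path(38, 29, 0.04, 0.06, 0.03, 10)   # hoped-for 6% AI growth
print(round(baseline[-1], 2), round(ai_boom[-1], 2))
```

Even in the boom scenario the ratio does not fall while the primary deficit persists; faster growth only slows the climb. That is the arithmetic behind the point that a revenue windfall alone, with spending untouched, does not pay down the debt.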

But even if the government hits the tax revenue jackpot before Congress drives us off a fiscal cliff, it would be naïve to assume lawmakers would pay down the debt. 

The More the Government Gets, the More the Government Spends

For the sake of argument, suppose the tech optimists are right, and the federal government enjoys a massive AI-driven revenue windfall. Understanding what happens next requires understanding the incentives of politicians and their voters.

This is where public choice theory shines. Rather than assuming politicians and voters act in everyone’s best interest, this branch of economics recognizes that people don’t become angels once they interact with the government. Incentives matter, especially for politicians.

Incentives are why we have a deficit in the first place. The public isn’t particularly interested in financial restraint because high spending and low taxes benefit them now, and the resulting debt is some future generation’s problem. Politicians surely see the crisis brewing, but solving it is a sure way to get voted out of office. And so the incentive is to run constant deficits and grow the debt year after year, decade after decade.

Without changing incentives, it will be hard to avoid spending new revenue. Ballooning coffers mean voters will demand that the government dole out more goodies (especially if AI displaces workers along the way). Washington already excels at entertaining expensive ideas: healthcare subsidies for well-off families, a universal basic income, generous tax cuts, a fifty-percent increase in military spending, all despite whatever restraint the current deficit imposes. Imagine the wish list once the deficit shrinks even a little.

Keep reading

OpenAI on Surveillance and Autonomous Killings: You’re Going to Have to Trust Us

OpenAI claims it has accomplished what Anthropic couldn’t: securing a Pentagon contract that won’t cross professed red lines against dragnet domestic spying and the use of artificial intelligence to order lethal military strikes. Just don’t expect any proof.

Sam Altman, OpenAI’s CEO, announced the company’s big win with the Defense Department in a post on X on February 27.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he wrote. The Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

The deal came after the very public implosion of what was to be a similar contract between the U.S. military and Anthropic, one of OpenAI’s chief rivals. Anthropic had said negotiations collapsed because it could not enshrine prohibitions against killer robots and domestic spying in its contract. The company’s insistence on these two points earned it the wrath of the Pentagon and President Donald Trump, who ordered the government to phase out use of Anthropic’s tools within six months.

But if the government booted Anthropic for refusing mass surveillance and autonomous weapons, how could OpenAI take over the contract without having the same problem?

OpenAI has attempted to square this circle through a string of posts to X by company executives and researchers, including Katrina Mulligan, its national security chief, and a claim by Altman that the company negotiated stricter protections around domestic surveillance.

The company and the government, however, are not releasing the only proof that matters: the contract itself.

The Department of Defense did not respond to a request for comment.

Keep reading

Online age-verification tools spread across U.S. for child safety, but adults are being surveilled

New U.S. laws designed to protect minors are pulling millions of adult Americans into mandatory age-verification gates to access online content, leading to backlash from users and criticism from privacy advocates that a free and open internet is at stake. Roughly half of U.S. states have enacted or are advancing laws requiring platforms — including adult content sites, online gaming services, and social media apps — to block underage users, forcing companies to screen everyone who approaches these digital gates.

“There’s a big spectrum,” said Joe Kaufmann, global head of privacy at Jumio, one of the largest digital identity-verification and authentication platforms. He explained that the patchwork of state laws varies in technical demands and compliance expectations. “The regulations are moving in many different directions at once,” he said.

Chat platform Discord announced plans in February to roll out mandatory age verification globally, which the company said would rely on verification methods designed so facial analysis occurs on a user’s device and submitted data would be deleted immediately. The proposal quickly drew backlash from users concerned about having to submit selfies or government IDs to access certain features, which led Discord to delay the launch until the second half of this year.

“Let me be upfront: we knew this rollout was going to be controversial. Any time you introduce something that touches identity and verification, people are going to have strong feelings,” Discord chief technology officer and co-founder Stanislav Vishnevskiy wrote in a Feb. 24 blog post.

Websites offering adult content, gambling, or financial services often rely on full identity verification that requires scanning a government ID and matching it to a live image. But most of the verification systems powering these checkpoints — often run by specialized identity-verification vendors on behalf of websites — rely on artificial intelligence such as facial recognition and age-estimation models that analyze selfies or video to determine in seconds whether someone is old enough to access content. Social media and lower-risk services may use lighter estimation tools designed to confirm age without permanently storing detailed identity records.  
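The tiering described above can be sketched as a simple decision rule. Everything in this snippet is hypothetical: the category names, the 18+ threshold, and the two check types are assumptions standing in for whatever each vendor and state law actually requires.

```python
# Hypothetical risk tiers loosely following the article's description:
# high-risk sites require full ID verification (government ID matched to a
# live image); lower-risk services use on-device age estimation that avoids
# storing detailed identity records.
TIERS = {
    "adult_content": "full_id_check",
    "gambling": "full_id_check",
    "financial": "full_id_check",
    "social_media": "age_estimation",
    "gaming": "age_estimation",
}

def required_check(service_category: str) -> str:
    """Pick a verification method for a service category (illustrative only)."""
    return TIERS.get(service_category, "none")

def passes_gate(service_category: str, estimated_age: int,
                id_verified: bool) -> bool:
    check = required_check(service_category)
    if check == "full_id_check":
        return id_verified           # needs a verified government ID
    if check == "age_estimation":
        return estimated_age >= 18   # model's estimate, no records kept
    return True

print(passes_gate("social_media", 21, id_verified=False))
print(passes_gate("gambling", 21, id_verified=False))
```

The design tension the vendors describe lives in that branch: pushing a category up a tier improves assurance but adds exactly the data collection and friction that users resist.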

Vendors say a challenge is balancing safety with how much friction users will tolerate. “We’re in the business of ensuring that you are absolutely keeping minors safe and out and able to let adults in with as little friction as possible,” said Rivka Gewirtz Little, chief growth officer at identity-verification platform Socure. Excessive data collection, she added, creates friction that users resist. 

Keep reading

Incinerated Children and Decimated Liberty: How the US War Machine Slaughters Foreigners to Build a Domestic Panopticon

Randolph Bourne famously penned that war is the health of the State, a grim reality that has haunted the American populace for over eight decades. We are taught in state-funded schools that the military goes abroad to fight for our freedoms, yet every single conflict since the end of the Second World War has been explicitly used as a mechanism to systematically dismantle the liberties of the domestic population. Iran is no different, and in fact, may be much worse.

Foreign emergencies are consistently the Trojan horses used by the ruling class to bypass constitutional constraints, normalize mass surveillance, and entirely erode the principles of liberty right here at home.

The blueprint for the modern imperial presidency was drafted during the Korean conflict, a war that permanently altered the relationship between the executive branch and the limits of power. When Harry Truman decided to intervene in Korea, he completely bypassed Congress and Article I, Section 8 of the Constitution, setting a dangerous precedent that the president could unilaterally commit the nation to bloodshed without a formal declaration of war. By framing it merely as a “police action” for the United Nations, Truman fundamentally shifted the war-making power into the hands of a single, unaccountable individual.

But the usurpation of power didn’t stop at sending men to die in foreign lands; it immediately bled into domestic tyranny. In 1952, Truman issued Executive Order 10340, attempting to literally seize control of the nation’s privately owned steel mills to ensure production for his undeclared war. While the Supreme Court ultimately rebuked this specific overreach in Youngstown Sheet & Tube Co. v. Sawyer, the die was cast, proving the executive branch viewed private property as entirely subordinate to the machinery of war. Sound familiar? Think Anthropic.

As the warfare state rolled into the jungles of Vietnam, the financial and social costs required an entirely new level of domestic subjugation. To fund an unwinnable war without sparking an open revolt through direct taxation, Richard Nixon famously closed the gold window on August 15, 1971, entirely severing the dollar’s tie to physical gold. This singular act of financial warfare against the American public ushered in the era of fiat currency, allowing the Federal Reserve to print infinite money to fund infinite wars, guaranteeing the insidious, hidden tax of inflation that continues to crush the middle class today.

Domestically, the state recognized that an awakened public was its greatest threat, prompting the FBI to launch COINTELPRO, a massive, covert operation detailed in the Church Committee Report that treated peaceful dissent, civil rights leaders, and anti-war activists as literal enemies of the state. The political establishment also moved aggressively against free speech, passing the 1965 Draft Card Mutilation Act to ensure that young men who publicly burned their draft cards to protest forced conscription could be violently thrown in cages. The message was clear: criticize the war machine, and you will be targeted, surveilled, and aggressively prosecuted.

Keep reading