The Dark Side of AI: Innocent Grandmother Wrongfully Jailed for 6 Months After Facial Recognition Error

A Tennessee grandmother spent nearly six months behind bars in North Dakota, a state she had never even set foot in, after being wrongfully identified by AI facial recognition technology in a bank fraud investigation.

The Grand Forks Herald reports that Angela Lipps, a 50-year-old mother of three and grandmother of five from Tennessee, found herself trapped in a nightmare that began last July when U.S. Marshals arrested her at gunpoint while she was babysitting four young children. Fargo police had used facial recognition software to identify her as the primary suspect in an organized bank fraud case, despite the fact that she had never set foot in North Dakota.

The case began in April and May 2025 when Fargo Police Department detectives investigated several bank fraud incidents. Surveillance footage captured a woman using a fraudulent U.S. Army military identification card to withdraw tens of thousands of dollars from local banks. To identify the suspect, investigators employed facial recognition software, which incorrectly matched the woman in the videos to Lipps.

According to court documents obtained through an open records request, the detective assigned to the case reviewed Lipps’ social media accounts and Tennessee driver’s license photo after receiving the facial recognition match. In the charging document, the detective stated that Lipps appeared to be the suspect based on facial features, body type, hairstyle, and hair color. Notably, no one from the Fargo Police Department contacted Lipps to question her before filing charges.

Lipps was arrested on July 14 and booked into her local county jail in Tennessee as a fugitive from justice. She faced four counts of unauthorized use of personal identifying information and four counts of theft in North Dakota. Held without bail due to her fugitive status, Lipps spent 108 days in the Tennessee jail before North Dakota officers transported her to Fargo on October 30.

“It was so scary, I can still see it in my head, over and over again,” Lipps said during an interview about her ordeal.

Keep reading

AI Can Now Unmask Anonymous Internet Users, New Study Finds

It looks like AI can now unmask anonymous accounts across the internet. That’s according to a new study by Simon Lermen (MATS), Daniel Paleka (ETH Zurich), Joshua Swanson (ETH Zurich), Michael Aerni (ETH Zurich), Nicholas Carlini (Anthropic), and Florian Tramèr (ETH Zurich), published on arXiv.

In the paper, “Large-Scale Online Deanonymization with LLMs,” the researchers show that modern large language models (LLMs) can re-identify people behind pseudonymous online accounts at a scale and accuracy that far surpass previous techniques.

The core contribution is an automated deanonymization pipeline powered by LLMs, according to the new study. Instead of relying on structured datasets or hand-engineered features—like earlier attacks on the Netflix Prize dataset—the system works directly on raw, unstructured text.

Given posts, comments, or interview transcripts written under a pseudonym, the pipeline extracts identity-relevant signals, searches for likely matches using semantic embeddings, and then uses higher-level reasoning to verify the most promising candidates while filtering out false positives. The result is a scalable attack that mirrors—and in some cases exceeds—the effectiveness of a dedicated human investigator.
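
For a sense of how the stages fit together, here is a minimal Python sketch of such a pipeline. The embedding model, the five-token verification heuristic, and all function names are our own illustrative assumptions, not the authors' actual implementation; in particular, the verification stand-in is a placeholder for what the paper does with a reasoning model.

```python
# Minimal sketch of an LLM-driven deanonymization pipeline (illustrative only;
# not the paper's code). Assumes the sentence-transformers library.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def extract_signals(posts: list[str]) -> str:
    # Step 1: condense a pseudonymous account's posts into identity-relevant
    # signals (location hints, profession, writing quirks). A real system
    # would prompt an LLM for this; here we simply concatenate the raw text.
    return " ".join(posts)

def rank_candidates(signals: str, candidates: dict[str, str], k: int = 10):
    # Step 2: embed the signals and every candidate profile, then rank
    # candidates by cosine similarity in the shared embedding space.
    query = embedder.encode([signals])[0]
    names = list(candidates)
    vecs = embedder.encode([candidates[n] for n in names])
    sims = vecs @ query / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(query))
    order = np.argsort(-sims)[:k]
    return [(names[i], float(sims[i])) for i in order]

def llm_verify(signals: str, profile: str) -> bool:
    # Step 3: a reasoning model would weigh the evidence and reject false
    # positives. This trivial token-overlap check is NOT that model; swap
    # in a real LLM call in practice.
    overlap = set(signals.lower().split()) & set(profile.lower().split())
    return len(overlap) >= 5

def deanonymize(posts: list[str], candidates: dict[str, str]):
    # Full pass: extract, retrieve, verify; return the first verified match.
    signals = extract_signals(posts)
    for name, score in rank_candidates(signals, candidates):
        if llm_verify(signals, candidates[name]):
            return name, score
    return None
```

In the paper's settings, the candidate pool would be something like a set of public LinkedIn profiles or other Reddit accounts; the verification stage is what keeps precision high even when the retrieval stage surfaces plausible-looking but wrong candidates.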

To evaluate their approach, the researchers constructed three datasets with known ground truth. The first links pseudonymous Hacker News users to real-world LinkedIn profiles, relying on cross-platform clues embedded in public text. The second matches users across movie discussion communities on Reddit. The third takes a single Reddit user’s history, splits it into two time-separated profiles, and tests whether the system can reconnect them.
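
As a concrete illustration of the third setup, here is a small sketch of how a single user's history might be split at a time boundary; the field names and the one-year gap are assumptions about the data layout, not the paper's code.

```python
# Illustrative sketch: split one user's post history into two time-separated
# pseudo-profiles with known ground truth (same author, no shared window).
from datetime import timedelta

def temporal_split(posts: list[dict], gap: timedelta = timedelta(days=365)):
    """posts: [{"text": str, "created": datetime}, ...] for a single user."""
    posts = sorted(posts, key=lambda p: p["created"])
    midpoint = posts[len(posts) // 2]["created"]
    # Keep only posts at least half the gap away from the midpoint on each
    # side, so the two profiles are separated by the full gap.
    early = [p for p in posts if p["created"] <= midpoint - gap / 2]
    late = [p for p in posts if p["created"] >= midpoint + gap / 2]
    return early, late
```

Because the author of both halves is known, recall and precision can be measured directly against ground truth.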

Across all three settings, LLM-based methods dramatically outperformed classical baselines, which often achieved near-zero recall.

The headline numbers are striking. In some experiments, the system achieved up to 68% recall at 90% precision—meaning it correctly identified a substantial portion of targets while keeping false accusations low. Even when matching temporally split Reddit accounts separated by a year, performance remained strong. In contrast, traditional non-LLM approaches struggled to produce meaningful matches. The findings suggest that advances in reasoning and representation learning have transformed deanonymization from a niche, data-hungry attack into a broadly applicable capability.
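
For readers unfamiliar with the metrics: precision is the share of claimed matches that are correct, while recall is the share of true identities actually recovered. A toy illustration with invented numbers, not figures from the paper:

```python
# Toy illustration of the reported operating point (numbers invented).
targets = 100        # pseudonymous accounts with a known true identity
claimed = 76         # matches the system asserts
correct = 68         # asserted matches that are right

precision = correct / claimed   # ~0.89, near the paper's 90% figure
recall = correct / targets      # 0.68: most targets identified
print(f"precision={precision:.2f}, recall={recall:.2f}")
```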

Keep reading

Palantir CEO Says AI Will Shift Power Away From Democratic Voters and Toward Working-Class Men

Palantir CEO Alex Karp has said that artificial intelligence (AI) could shift economic influence away from highly educated voters who tend to support Democrats and toward vocationally trained, working-class men.

In an interview with CNBC, Karp discussed the broader societal impact of artificial intelligence and how it is expected to transform employment.

“This technology disrupts humanities-trained, largely Democratic voters, and makes their economic power less,” Karp said. “And increases the economic power of vocationally trained, working-class, often male voters.”

“So these disruptions are gonna disrupt every aspect of our society,” he said.

“To make this work, we have to come to an agreement of what it is we’re going to do with the technology; how are we gonna explain to people who are likely gonna have less good, and less interesting jobs.”

Keep reading

Ben Affleck Once Criticized AI, Now Netflix Is Buying His AI Startup For $600 Million

Ben Affleck—who has previously warned about the risks artificial intelligence poses to Hollywood—has sold his own AI filmmaking startup to Netflix in a deal that could reach $600 million, according to Bloomberg.

The cash portion of the acquisition is smaller, with additional payments tied to performance targets, but it still ranks among the largest AI-focused deals by a major studio.

The startup, InterPositive, developed software designed to help directors edit footage after filming, such as removing stray objects or changing elements in the background. The tools are intended to work with existing film rather than generate entirely new content. Director David Fincher has already used the technology on an upcoming movie starring Brad Pitt.

Netflix’s purchase highlights how studios are increasingly turning to artificial intelligence to streamline production and reduce costs. Rivals such as Amazon and The Walt Disney Company are also exploring AI tools for film and television development.

Bloomberg writes that Affleck built InterPositive with backing from RedBird Capital Partners and initially kept the project quiet before seeking investors in 2025. He has argued the technology should function as a controlled filmmaking aid: the system trains only on footage from a specific film and doesn’t scrape outside movies or generate new works independently.

Keep reading

AI Won’t Fix America’s Looming Debt Crisis

Last month, Congress sparred with the president over a partial budget, but with few real cuts, America’s slow march toward an epic debt crisis went on undeterred. With over $38 trillion in debt and interest payments exceeding defense or Medicare spending, one would expect lawmakers to confront reality and do the difficult work needed to restore fiscal sanity. But why would they? Cutting entitlements and increasing middle-class taxes rarely make for winning campaign slogans.

It’s no surprise, then, that some prefer to pin their hopes on AI as America’s fiscal savior. Vanguard’s chief economist Joe Davis argued there is as much as a 50 percent chance that AI will prevent a debt-driven economic malaise. Elon Musk voiced a similar conclusion late last year, claiming AI and robotics are “the only thing that’s going to solve the US debt crisis.”

The argument goes like this: an AI boom drives explosive economic growth and tax revenue, while, at the same time, productivity gains impressively offset any upward pressure on interest rates. The deficit becomes a surplus and the overall debt shrinks, possibly disappearing entirely.

If that sounds less like a policy plan and more like a retirement strategy built around winning the lottery, you’re not wrong. The entire scenario hinges on a massive if: that AI generates extraordinary revenue and does it quickly enough to outrun rising interest costs.

But even if the government hits the tax revenue jackpot before Congress drives us off a fiscal cliff, it would be naïve to assume lawmakers would pay down the debt. 

The More the Government Gets, the More the Government Spends

For the sake of argument, suppose the tech optimists are right, and the federal government enjoys a massive AI-driven revenue windfall. Understanding what happens next requires understanding the incentives of politicians and their voters.

This is where public choice shines. Rather than assuming politicians and voters act in everyone’s best interest, this branch of economics recognizes that people don’t become angels once they interface with the government. Incentives matter, especially for politicians.

Incentives are why we have a deficit in the first place. The public isn’t particularly interested in financial restraint because high spending and low taxes benefit them now, and the resulting debt is some future generation’s problem. Politicians surely see the crisis brewing, but solving it is a sure way to get voted out of office. And so the incentive is to run constant deficits and grow the debt year after year, decade after decade.

Without changing incentives, it will be hard to avoid spending new revenue. Ballooning coffers mean voters will demand that the government dole out more goodies (especially if AI displaces workers along the way). Washington already excels at entertaining expensive ideas: healthcare subsidies for well-off families, a universal basic income, generous tax cuts, a fifty-percent increase in military spending, all despite whatever pushback the current deficit is able to muster. Imagine the wish list once that deficit shrinks even a little.

Keep reading

OpenAI on Surveillance and Autonomous Killings: You’re Going to Have to Trust Us

OpenAI claims it has accomplished what Anthropic couldn’t: securing a Pentagon contract that won’t cross professed red lines against dragnet domestic spying and the use of artificial intelligence to order lethal military strikes. Just don’t expect any proof.

Sam Altman, OpenAI’s CEO, announced the company’s big win with the Defense Department in a post on X on February 27.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he wrote. The Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

The deal came after the very public implosion of what was to be a similar contract between the U.S. military and Anthropic, one of OpenAI’s chief rivals. Anthropic had said negotiations collapsed because it could not enshrine prohibitions against killer robots and domestic spying in its contract. The company’s insistence on these two points earned it the wrath of the Pentagon and President Donald Trump, who ordered the government to phase out use of Anthropic’s tools within six months.

But if the government booted Anthropic for refusing mass surveillance and autonomous weapons, how could OpenAI take over the contract without having the same problem?

OpenAI has attempted to square this circle through a string of posts to X by company executives and researchers, including Katrina Mulligan, its national security chief, and a claim by Altman that the company negotiated stricter protections around domestic surveillance.

The company and the government, however, are not releasing the only proof that matters: the contract itself.

The Department of Defense did not respond to a request for comment.

Keep reading

Online age-verification tools spread across U.S. for child safety, but adults are being surveilled

New U.S. laws designed to protect minors are pulling millions of adult Americans into mandatory age-verification gates to access online content, leading to backlash from users and criticism from privacy advocates that a free and open internet is at stake. Roughly half of U.S. states have enacted or are advancing laws requiring platforms — including adult content sites, online gaming services, and social media apps — to block underage users, forcing companies to screen everyone who approaches these digital gates.

“There’s a big spectrum,” said Joe Kaufmann, global head of privacy at Jumio, one of the largest digital identity-verification and authentication platforms. He explained that the patchwork of state laws varies in technical demands and compliance expectations. “The regulations are moving in many different directions at once,” he said.

Social media company Discord announced plans in February to roll out mandatory age verification globally, which the company said would rely on verification methods designed so that facial analysis would occur on a user’s device and submitted data would be deleted immediately. The proposal quickly drew backlash from users concerned about having to submit selfies or government IDs to access certain features, which led Discord to delay the launch until the second half of this year.

“Let me be upfront: we knew this rollout was going to be controversial. Any time you introduce something that touches identity and verification, people are going to have strong feelings,” Discord chief technology officer and co-founder Stanislav Vishnevskiy wrote in a Feb. 24 blog post.

Websites offering adult content, gambling, or financial services often rely on full identity verification that requires scanning a government ID and matching it to a live image. But most of the verification systems powering these checkpoints — often run by specialized identity-verification vendors on behalf of websites — rely on artificial intelligence such as facial recognition and age-estimation models that analyze selfies or video to determine in seconds whether someone is old enough to access content. Social media and lower-risk services may use lighter estimation tools designed to confirm age without permanently storing detailed identity records.  
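
To make the distinction concrete, here is a hypothetical sketch of how a risk-tiered age gate might route users. The tier names, the age threshold, and the two vendor-service stubs are invented for illustration and do not reflect any actual vendor's API.

```python
# Hypothetical risk-tiered age gate; all service calls are invented stand-ins.
from dataclasses import dataclass

@dataclass
class GateResult:
    allowed: bool
    method: str

def verify_government_id(id_scan: bytes | None, selfie: bytes) -> bool:
    # Stand-in for a vendor document-check + face-match service (hypothetical).
    return id_scan is not None   # placeholder logic only

def estimate_age_from_selfie(selfie: bytes) -> int:
    # Stand-in for an on-device age-estimation model (hypothetical).
    return 25                    # placeholder value only

def age_gate(service_risk: str, selfie: bytes, id_scan: bytes | None) -> GateResult:
    if service_risk == "high":   # adult content, gambling, financial services
        # Full identity verification: scan a government ID and match it to a
        # live image before granting access; an identity record is created.
        return GateResult(verify_government_id(id_scan, selfie), "document+liveness")
    # Lower-risk services: an AI age-estimation model analyzes the selfie,
    # ideally on-device, without permanently storing identity records.
    return GateResult(estimate_age_from_selfie(selfie) >= 18, "age-estimation")
```

The design trade-off vendors describe is visible in the two branches: the high-risk path creates a stronger assurance but also a detailed identity record, while the estimation path trades some accuracy for less data collection and less friction.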

Vendors say a challenge is balancing safety with how much friction users will tolerate. “We’re in the business of ensuring that you are absolutely keeping minors safe and out and able to let adults in with as little friction as possible,” said Rivka Gewirtz Little, chief growth officer at identity-verification platform Socure. Excessive data collection, she added, creates friction that users resist. 

Keep reading

Incinerated Children and Decimated Liberty: How the US War Machine Slaughters Foreigners to Build a Domestic Panopticon

Randolph Bourne famously penned that war is the health of the State, a grim reality that has haunted the American populace for over eight decades. We are taught in state-funded schools that the military goes abroad to fight for our freedoms, yet every single conflict since the end of the Second World War has been explicitly used as a mechanism to systematically dismantle the liberties of the domestic population. Iran is no different, and in fact, may be much worse.

Foreign emergencies are consistently the Trojan horses used by the ruling class to bypass constitutional constraints, normalize mass surveillance, and entirely erode the principles of liberty right here at home.

The blueprint for the modern imperial presidency was drafted during the Korean conflict, a war that permanently altered the relationship between the executive branch and the limits of power. When Harry Truman decided to intervene in Korea, he completely bypassed Congress and Article I, Section 8 of the Constitution, setting a dangerous precedent that the president could unilaterally commit the nation to bloodshed without a formal declaration of war. By framing it merely as a “police action” for the United Nations, Truman fundamentally shifted the war-making power into the hands of a single, unaccountable individual.

But the usurpation of power didn’t stop at sending men to die in foreign lands; it immediately bled into domestic tyranny. In 1952, Truman issued Executive Order 10340, attempting to literally seize control of the nation’s privately owned steel mills to ensure production for his undeclared war. While the Supreme Court ultimately rebuked this specific overreach in Youngstown Sheet & Tube Co. v. Sawyer, the die was cast, proving the executive branch viewed private property as entirely subordinate to the machinery of war. Sound familiar? Think: Anthropic.

As the warfare state rolled into the jungles of Vietnam, the financial and social costs required an entirely new level of domestic subjugation. To fund an unwinnable war without sparking an open revolt through direct taxation, Richard Nixon famously closed the gold window on August 15, 1971, entirely severing the dollar’s tie to physical gold. This singular act of financial warfare against the American public ushered in the era of fiat currency, allowing the Federal Reserve to print infinite money to fund infinite wars, guaranteeing the insidious, hidden tax of inflation that continues to crush the middle class today.

Domestically, the state recognized that an awakened public was its greatest threat, prompting the FBI to launch COINTELPRO, a massive, covert operation detailed in the Church Committee Report that treated peaceful dissent, civil rights leaders, and anti-war activists as literal enemies of the state. The political establishment also moved aggressively against free speech, passing the 1965 Draft Card Mutilation Act to ensure that young men who publicly burned their draft cards to protest forced conscription could be violently thrown in cages. The message was clear: criticize the war machine, and you will be targeted, surveilled, and aggressively prosecuted.

Keep reading

How Grok’s Football Roasts Put X in the Crosshairs of Britain’s Online Censorship Law

Few subjects in Britain carry as much emotional weight as football. Club loyalty runs deep, tragedies remain painfully close to the surface, and rivalries often cross the line between banter and cruelty. That volatile mix resurfaced this week when Grok, the AI chatbot on X, generated what officials described as “vulgar roasts” after users explicitly prompted it to produce offensive material.

UK authorities reacted quickly, discussing the Online Safety Act, Britain’s new censorship law, and raising the possibility of serious financial penalties for X. Under the law, platforms can face fines reaching up to ten percent of global revenue if they fail to address harmful content.

The material dredged up some of the most painful chapters in English football history. It mocked the Hillsborough disaster, where 97 Liverpool supporters were crushed to death at an FA Cup semi-final in Sheffield after police failures led to fatal overcrowding in a standing pen.

It also referenced the Munich air disaster, which killed 23 people, including eight Manchester United players, when the team’s aircraft crashed during takeoff in icy conditions. Grok further alluded to the recent death of Diogo Jota, who died in a car accident in Spain in July 2025 at the age of 28, while playing for Liverpool F.C.

Keep reading

Why Are All AI Models Left Wing?

Ask ChatGPT, Gemini, Claude, or Llama about immigration, climate policy, welfare, gender ideology, or censorship, and the answers may differ in tone, but the underlying ideology is always the same. Multiple studies now find that leading language models lean left on contested political questions, often favouring progressive social assumptions and more interventionist economic positions. Researchers in Germany found strong alignment with left-wing parties across major models. Another study found instruction-tuned models were generally more left-leaning. A third concluded that larger models often become more politically skewed, not less.

That is a serious problem for a technology sold as an impartial guide to information. If the tools increasingly used to explain the world already tilt in one direction, the question is no longer whether bias exists, but how far it shapes what millions of users come to regard as neutral truth.

It’s Not Just a Theory Anymore

For years, concerns about political bias in AI were brushed aside as anecdotal. That argument has weakened sharply. A 2025 study examining AI-based voting advice tools and large language models ahead of Germany’s federal election found that the models showed strong alignment with left-wing parties, averaging more than 75 per cent, while their alignment with centre-right parties was below 50 per cent and with right-wing parties was around 30 per cent. The authors warned that systems presented as neutral informational tools were in fact producing substantially biased outputs.

Another 2025 paper testing popular models against Germany’s Wahl-O-Mat framework reached a similar conclusion. It found a bias towards left-leaning parties and reported that this tendency was most dominant in larger models. The study’s title was blunt enough on its own: Large Means Left.

A separate theory-grounded analysis based on 88,110 responses across 11 commercial and open models found that political bias measures can vary by prompt, but that instruction-tuned systems were generally more left-leaning. The important point is not that every model behaves identically. It is that the overall pattern keeps recurring across methods, datasets, and research teams.

Keep reading