Ben Affleck Once Criticized AI, Now Netflix Is Buying His AI Startup For $600 Million

Ben Affleck—who has previously warned about the risks artificial intelligence poses to Hollywood—has sold his own AI filmmaking startup to Netflix in a deal that could reach $600 million, according to Bloomberg.

The cash portion of the acquisition is smaller than the headline figure, with additional payments tied to performance targets, but the deal still ranks among the largest AI-focused acquisitions by a major studio.

The startup, InterPositive, developed software designed to help directors edit footage after filming, such as removing stray objects or changing elements in the background. The tools are intended to work with existing film rather than generate entirely new content. Director David Fincher has already used the technology on an upcoming movie starring Brad Pitt.

Netflix’s purchase highlights how studios are increasingly turning to artificial intelligence to streamline production and reduce costs. Rivals such as Amazon and The Walt Disney Company are also exploring AI tools for film and television development.

Bloomberg writes that Affleck built InterPositive with backing from RedBird Capital Partners and initially kept the project quiet before seeking investors in 2025. He has argued the technology should function as a controlled filmmaking aid: the system trains only on footage from a specific film and doesn’t scrape outside movies or generate new works independently.

Keep reading

Drone Wave Hits Iran's Streets As Hundreds Of Drones Kill Regime Members Individually In Dystopian New Form Of Warfare

Reports from Iran say Israeli drones are now hunting Basij and Revolutionary Guard Corps checkpoints in the streets across Iran, in what appears to be a wave involving hundreds of drones.

The apparent goal is to clear the streets of the regime’s repression forces and allow opponents of the regime to come out, reports Israeli Live News.

Reports from Iran say drones are exploding on motorcycles and vehicles, with dozens of Basij forces reportedly killed at checkpoints, bases, police stations and regime gathering points.

This is being described as Iran’s version of the pager attacks.

Keep reading

AI Won’t Fix America’s Looming Debt Crisis

Last month, Congress sparred with the president over a partial budget, but with few real cuts, America’s slow march toward an epic debt crisis went on undeterred. With over $38 trillion in debt and interest payments exceeding defense or Medicare spending, one would expect lawmakers to confront reality and do the difficult work needed to restore fiscal sanity. But why would they? Cutting entitlements and increasing middle-class taxes rarely make for winning campaign slogans.

It’s no surprise, then, that some prefer to pin their hopes on AI as America’s fiscal savior. Vanguard’s chief economist Joe Davis argued there is as much as a 50 percent chance AI will prevent a debt-driven economic malaise. Elon Musk voiced a similar conclusion late last year, claiming AI and robotics are “the only thing that’s going to solve the US debt crisis.”

The argument goes like this: an AI boom drives explosive economic growth and tax revenue, while, at the same time, productivity gains impressively offset any upward pressure on interest rates. The deficit becomes a surplus and the overall debt shrinks, possibly disappearing entirely.

If that sounds less like a policy plan and more like a retirement strategy built around winning the lottery, you’re not wrong. The entire scenario hinges on a massive if: that AI generates extraordinary revenue and does it quickly enough to outrun rising interest costs.

But even if the government hits the tax revenue jackpot before Congress drives us off a fiscal cliff, it would be naïve to assume lawmakers would pay down the debt. 

The More the Government Gets, the More the Government Spends

For the sake of argument, suppose the tech optimists are right, and the federal government enjoys a massive AI-driven revenue windfall. Understanding what happens next requires understanding the incentives of politicians and their voters.

This is where public choice shines. Rather than assuming politicians and voters act in everyone’s best interest, this branch of economics recognizes that people don’t become angels once they interface with the government. Incentives matter, especially for politicians.

Incentives are why we have a deficit in the first place. The public isn’t particularly interested in financial restraint because high spending and low taxes benefit them now, and the resulting debt is some future generation’s problem. Politicians surely see the crisis brewing, but solving it is a sure way to get voted out of office. And so the incentive is to run constant deficits and grow the debt year after year, decade after decade.

Without changing incentives, it will be hard to avoid spending new revenue. Ballooning coffers mean voters will demand that the government dole out more goodies (especially if AI displaces workers along the way). Washington already excels at entertaining expensive ideas: healthcare subsidies for well-off families, a universal basic income, generous tax cuts, a fifty-percent increase in military spending, all despite the pushback the current deficit’s able to muster. Imagine the wish list after it drops even a little.

Keep reading

Four Military Drones Stolen from Fort Campbell in Kentucky

Four military drones were stolen from Fort Campbell in Kentucky.

In a post on the U.S. Army Fort Campbell Facebook Page, a spokesperson wrote that four Skydio X10D Drone Systems were stolen from the 326th Division Engineer Battalion building.

The Department of the Army Criminal Investigation Division is offering a reward for information that leads to the conviction of those behind the theft.

The drones were stolen in November of last year, but Fort Campbell did not release information and surveillance photos to the public until March 11.

Per WKNY:

The U.S. Army Fort Campbell is reporting the theft of four drone systems, and it needs your help to locate the suspects.

According to a social media post by the U.S. Army Fort Campbell, the Department of the Army Criminal Investigation Division is offering $5,000 for information leading to the arrest and conviction of those responsible for the theft.

Fort Campbell states the theft involved four Skydio X10D drone systems.

The drone systems were last seen on November 21, 2025 at the 326th Division Engineer Battalion at Building 6955 on A Shau Valley Road in Fort Campbell, according to the post.

Between November 21-24, 2025, Fort Campbell states unknown individuals unlawfully accessed the building and took the drones.

The Skydio X10D is an unmanned aerial drone designed with a modular payload capability.

Keep reading

OpenAI on Surveillance and Autonomous Killings: You’re Going to Have to Trust Us

OpenAI claims it has accomplished what Anthropic couldn’t: securing a Pentagon contract that won’t cross professed red lines against dragnet domestic spying and the use of artificial intelligence to order lethal military strikes. Just don’t expect any proof.

Sam Altman, OpenAI’s CEO, announced the company’s big win with the Defense Department in a post on X on February 27.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he wrote. The Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

The deal came after the very public implosion of what was to be a similar contract between the U.S. military and Anthropic, one of OpenAI’s chief rivals. Anthropic had said negotiations collapsed because it could not enshrine prohibitions against killer robots and domestic spying in its contract. The company’s insistence on these two points earned it the wrath of the Pentagon and President Donald Trump, who ordered the government to phase out use of Anthropic’s tools within six months.

But if the government booted Anthropic for refusing mass surveillance and autonomous weapons, how could OpenAI take over the contract without having the same problem?

OpenAI has attempted to square this circle through a string of posts to X by company executives and researchers, including Katrina Mulligan, its national security chief, and a claim by Altman that the company negotiated stricter protections around domestic surveillance.

The company and the government, however, are not releasing the only proof that matters: the contract itself.

The Department of Defense did not respond to a request for comment.

Keep reading

Online age-verification tools spread across U.S. for child safety, but adults are being surveilled

New U.S. laws designed to protect minors are pulling millions of adult Americans into mandatory age-verification gates to access online content, leading to backlash from users and criticism from privacy advocates that a free and open internet is at stake. Roughly half of U.S. states have enacted or are advancing laws requiring platforms — including adult content sites, online gaming services, and social media apps — to block underage users, forcing companies to screen everyone who approaches these digital gates.

“There’s a big spectrum,” said Joe Kaufmann, global head of privacy at Jumio, one of the largest digital identity-verification and authentication platforms. He explained that the patchwork of state laws varies in technical demands and compliance expectations. “The regulations are moving in many different directions at once,” he said.

Social media company Discord announced plans in February to roll out mandatory age verification globally, which the company said would rely on verification methods designed so facial analysis occurs on a user’s device and submitted data would be deleted immediately. The proposal quickly drew backlash from users concerned about having to submit selfies or government IDs to access certain features, which led Discord to delay the launch until the second half of this year.

“Let me be upfront: we knew this rollout was going to be controversial. Any time you introduce something that touches identity and verification, people are going to have strong feelings,” Discord chief technology officer and co-founder Stanislav Vishnevskiy wrote in a Feb. 24 blog post.

Websites offering adult content, gambling, or financial services often rely on full identity verification that requires scanning a government ID and matching it to a live image. But most of the verification systems powering these checkpoints — often run by specialized identity-verification vendors on behalf of websites — rely on artificial intelligence such as facial recognition and age-estimation models that analyze selfies or video to determine in seconds whether someone is old enough to access content. Social media and lower-risk services may use lighter estimation tools designed to confirm age without permanently storing detailed identity records.  

Vendors say a challenge is balancing safety with how much friction users will tolerate. “We’re in the business of ensuring that you are absolutely keeping minors safe and out and able to let adults in with as little friction as possible,” said Rivka Gewirtz Little, chief growth officer at identity-verification platform Socure. Excessive data collection, she added, creates friction that users resist. 

Keep reading

UK Lords Back Facial Recognition Overreach, Protest Crackdown Powers

The UK Lords spent March 9 dismantling what little legal cover existed for anonymous protest and privacy, and building new tools to suppress both entirely.

Start with what they refused to protect. Peers voted down an amendment that would have kept the DVLA database (the equivalent of the DMV in the US) out of live facial recognition searches.

That database isn’t a surveillance archive. It was built to verify driving licenses. It contains photographs linked to the confirmed real-world identities of most UK drivers, and the Lords just cleared the path for police to run it against faces captured in real time at public gatherings. A licensing bureaucracy would become an identification engine. The repurposing happened quietly, through a vote most people won’t read about.

The Lords also voted down a proposed “defence of reasonable excuse” for concealing identity at protests. The amendment would have shifted the burden of proof onto police officers to justify why a face covering made someone arrestable.

Keep reading

Incinerated Children and Decimated Liberty: How the US War Machine Slaughters Foreigners to Build a Domestic Panopticon

Randolph Bourne famously penned that war is the health of the State, a grim reality that has haunted the American populace for over eight decades. We are taught in state-funded schools that the military goes abroad to fight for our freedoms, yet every single conflict since the end of the Second World War has been explicitly used as a mechanism to systematically dismantle the liberties of the domestic population. Iran is no different, and in fact, may be much worse.

Foreign emergencies are consistently the Trojan horses used by the ruling class to bypass constitutional constraints, normalize mass surveillance, and entirely erode the principles of liberty right here at home.

The blueprint for the modern imperial presidency was drafted during the Korean conflict, a war that permanently altered the relationship between the executive branch and the limits of power. When Harry Truman decided to intervene in Korea, he completely bypassed Congress and Article I, Section 8 of the Constitution, setting a dangerous precedent that the president could unilaterally commit the nation to bloodshed without a formal declaration of war. By framing it merely as a “police action” for the United Nations, Truman fundamentally shifted the war-making power into the hands of a single, unaccountable individual.

But the usurpation of power didn’t stop at sending men to die in foreign lands; it immediately bled into domestic tyranny. In 1952, Truman issued Executive Order 10340, attempting to literally seize control of the nation’s privately owned steel mills to ensure production for his undeclared war. While the Supreme Court ultimately rebuked this specific overreach in Youngstown Sheet & Tube Co. v. Sawyer, the die was cast, proving the executive branch viewed private property as entirely subordinate to the machinery of war. Sound familiar? Think, Anthropic.

As the warfare state rolled into the jungles of Vietnam, the financial and social costs required an entirely new level of domestic subjugation. To fund an unwinnable war without sparking an open revolt through direct taxation, Richard Nixon famously closed the gold window on August 15, 1971, entirely severing the dollar’s tie to physical gold. This singular act of financial warfare against the American public ushered in the era of fiat currency, allowing the Federal Reserve to print infinite money to fund infinite wars, guaranteeing the insidious, hidden tax of inflation that continues to crush the middle class today.

Domestically, the state recognized that an awakened public was its greatest threat, prompting the FBI to launch COINTELPRO, a massive, covert operation detailed in the Church Committee Report that treated peaceful dissent, civil rights leaders, and anti-war activists as literal enemies of the state. The political establishment also moved aggressively against free speech, passing the 1965 Draft Card Mutilation Act to ensure that young men who publicly burned their draft cards to protest forced conscription could be violently thrown in cages. The message was clear: criticize the war machine, and you will be targeted, surveilled, and aggressively prosecuted.

Keep reading

How Grok’s Football Roasts Put X in the Crosshairs of Britain’s Online Censorship Law

Few subjects in Britain carry as much emotional weight as football. Club loyalty runs deep, tragedies remain painfully close to the surface, and rivalries often cross the line between banter and cruelty. That volatile mix resurfaced this week when Grok, the AI chatbot on X, generated what officials described as “vulgar roasts” after users explicitly prompted it to produce offensive material.

UK authorities reacted quickly, discussing the Online Safety Act, Britain’s new censorship law, and raising the possibility of serious financial penalties for X. Under the law, platforms can face fines reaching up to ten percent of global revenue if they fail to address harmful content.

The material dredged up some of the most painful chapters in English football history. It mocked the Hillsborough disaster, where 97 Liverpool supporters were crushed to death at an FA Cup semi-final in Sheffield after police failures led to fatal overcrowding in a standing pen.

It also referenced the Munich air disaster, which killed 23 people, including eight Manchester United players, when the team’s aircraft crashed during takeoff in icy conditions. Grok further alluded to the recent death of Diogo Jota, the Liverpool F.C. forward who died in a car accident in Spain in June 2025 at the age of 28.

Keep reading

Why Are All AI Models Left Wing?

Ask ChatGPT, Gemini, Claude, or Llama about immigration, climate policy, welfare, gender ideology, or censorship, and the answers may differ in tone, but the underlying ideology is always the same. Multiple studies now find that leading language models lean left on contested political questions, often favouring progressive social assumptions and more interventionist economic positions. Researchers in Germany found strong alignment with left-wing parties across major models. Another study found instruction-tuned models were generally more left-leaning. A third concluded that larger models often become more politically skewed, not less. That is a serious problem for a technology sold as an impartial guide to information. If the tools increasingly used to explain the world already tilt in one direction, the question is no longer whether bias exists, but how far it shapes what millions of users come to regard as neutral truth.

It’s Not Just a Theory Anymore

For years, concerns about political bias in AI were brushed aside as anecdotal. That argument has weakened sharply. A 2025 study examining AI-based voting advice tools and large language models ahead of Germany’s federal election found that the models showed strong alignment, averaging more than 75 per cent, with left-wing parties, while their alignment with centre-right parties was below 50 per cent and with right-wing parties around 30 per cent. The authors warned that systems presented as neutral informational tools were in fact producing substantially biased outputs.

Another 2025 paper testing popular models against Germany’s Wahl-O-Mat framework reached a similar conclusion. It found a bias towards left-leaning parties and reported that this tendency was most dominant in larger models. The study’s title was blunt enough on its own: Large Means Left.

A separate theory-grounded analysis based on 88,110 responses across 11 commercial and open models found that political bias measures can vary by prompt, but that instruction-tuned systems were generally more left-leaning. The important point is not that every model behaves identically. It is that the overall pattern keeps recurring across methods, datasets, and research teams.

Keep reading