Precrime: Months Before Massacre, OpenAI Worried About Canada’s Trans Mass Killer

Months before a Canadian man in a dress went on a February 10 rampage, killing his mother and half-brother at home before slaughtering five students and an education assistant at a secondary school he had once attended, employees at OpenAI were deeply troubled by his interactions with the firm’s ChatGPT AI chatbot.

As first reported by the Wall Street Journal, Jesse Van Rootselaar’s ChatGPT activity was flagged by the company’s automated review system. When employees took a look at what he’d been up to over a several-day period in June 2025, they were alarmed. About a dozen of them debated what they should do.

Some were convinced Van Rootselaar’s descriptions of gun-violence scenarios signaled a substantial risk of real-world bloodshed, and implored their supervisors to notify police, according to the Journal’s unnamed sources. They opted against doing so, and a spokeswoman now says they’d concluded Van Rootselaar’s posts didn’t cross the threshold of posing a credible and imminent risk of serious harm. Instead, the company decided only to ban his account. 

About seven months after his disturbing series of interactions with ChatGPT, police say, he killed eight people and injured 25 more before killing himself at the school he’d once attended. Van Rootselaar’s social media and YouTube accounts contained transgender symbolism as well as the online name “JessJessUwU” (“UwU” being a meme phrase people may recognize from the bullet casings tied to the gay suspect charged in the assassination of Charlie Kirk).

Keep reading

How Developers Are Making AI Your Kid’s Third Parent In The Classroom

Under Roman law a father held a legal power called patria potestas, literally “the power of a father,” over his children. He could sell them, deny them property, or abandon a newborn on a hillside. The child was not a person but property under the law. What a surprise, then, that the so-called “paternalistic” Apostle Paul upended five centuries of that system in a single verse when he wrote, “Fathers, do not exasperate your children; instead, bring them up in the training and instruction of the Lord” (Ephesians 6:4). Roman law already demanded obedience to the father under the pater familias. So Paul’s revolutionary move was not to challenge obedience, but to tell the man holding absolute power that he had a duty to the best interests of the child rather than to himself.

Paul’s words to the Ephesians shaped Western family law for two millennia, including modern American case law (see Pierce v. Society of Sisters, 1925; Wisconsin v. Yoder, 1972). But today a different authority has moved into the space between parent and child: not a patriarch but an “aithority” — an algorithm built by the largest technology corporations on earth and dropped into American classrooms through a partnership with the teachers unions. Nobody sent a permission slip home.

The scale of “the aithority” in schools is already exasperating. In late 2025, Google announced its Gemini AI education tools had reached more than 10 million students across more than 1,000 U.S. institutions. The company rolled out more than 150 new AI features in a single year, trained more than 1 million educators for free, and embedded AI tutoring modules directly into Google Classroom. Separately, Google invested $1 billion in college-level AI integration. In June 2025 the American Federation of Teachers (AFT), the second-largest teachers union in the country, announced a partnership with OpenAI, Microsoft, and Anthropic to accelerate AI adoption in classrooms nationwide. That deal was negotiated between union leadership and three of the most powerful AI companies on earth. Parents were not at the table.

Keep reading

Farmer Hailed as Hero for Rejecting Huge Payment to Turn His Land Into a Giant Data Center

The immense hype surrounding AI has caused enormous data centers to crop up across the country, triggering significant opposition. It’s not just the loss of land: enormous power needs are pushing the grid into meltdown and driving up local electricity prices, catching the attention of politicians and their irate constituents.

One 86-year-old farmer in Cumberland County, Pennsylvania, has heard enough. As local Fox affiliate WPMT reports, Mervin Raudabaugh, who has farmed the surrounding land for more than 60 years, turned down more than $15 million from data center developers in a package deal that involved three neighboring property owners as well.

The farmer was offered $60,000 per acre to sell his property for data center construction. But giving up his family legacy wasn’t in the cards for him.

“I was not interested in destroying my farms,” he told WPMT. “That was the bottom line. It really wasn’t so much the economic end of it. I just didn’t want to see these two farms destroyed.”

Instead, he sold the development rights in December for just under $2 million to a conservation trust, taking a significant loss but guaranteeing that it would stay farmland in perpetuity.

Users on social media called him a “legend,” and argued he had “more integrity than the whole government.”

“Now that is a real hero in these gutless times!” another user tweeted.

“$15M is huge, but clean water, quiet land, and legacy don’t have a price tag,” another user argued.

The sheer amount of land being earmarked to construct enormous, energy- and water-hungry data centers is remarkable. A data center in Mount Pleasant, Wisconsin, is set to take up 600 acres, which could cost local residents their land, as ABC News reported this week. Another octogenarian farmer, the 83-year-old Tom Uttech, who has lived on his 52-acre Wisconsin property for almost 40 years, told the broadcaster that he “couldn’t believe” that a local utility company was looking to build “power lines that are 300 or something feet tall, taller than apparently the Statue of Liberty,” through his land to power the data center.

Per ABC, there are more than 3,000 data centers in the US, a number set to grow by another 1,200 that are currently under construction.

Keep reading

Starmer Announces Yet More Censorship

Even more censorship is on the way. The Government has announced plans to force AI chatbots to comply with malicious communications laws – and to give itself Orwellian powers to bring in yet more speech restrictions without Parliamentary oversight. Toby writes about the moves in the Telegraph.

The Government intends to bring forward amendments of its own to the schools Bill that will supposedly close a loophole in the Online Safety Act to make sure AI chatbots comply with Britain’s draconian censorship laws. That will mean that if Grok says something in response to a user prompt that breaches, say, the Malicious Communications Act 1988, which was designed to protect women from obscene phone calls, Ofcom can fine its parent company £18 million or 10% of its annual global turnover, whichever is higher.

This will be the death knell of Britain’s burgeoning AI sector, particularly as chatbots become more autonomous. What tech entrepreneur will risk setting up an AI company in the UK, knowing that if a chatbot shares an anti-immigration meme or misgenders a trans person, it could mean a swingeing fine?

Indeed, I wouldn’t be surprised if xAI, along with OpenAI and Anthropic, decide to withdraw access to their chatbots from UK residents. At the very least, we’ll be saddled with lobotomised versions that trot out progressive bromides whenever they’re asked a political question.

In addition, the Government has said it will pass a new law to stop children sending or receiving nude images. Needless to say, that’s already a criminal offence under the Protection of Children Act 1978, so what does the Government have in mind?

It has not said, but I fear it means embedding surveillance software in every smartphone to enable the authorities to monitor users’ activity, no doubt accompanied by mandatory digital ID so no one will be able to hide. Not even the People’s Republic of China does that.

The Government unveiled some other Orwellian measures, but rather than bring them in as revisions to the schools Bill, it will put through amendments that will enable it to make further changes to Britain’s censorship regime via secondary legislation, i.e., it will grant itself sweeping Henry VIII powers.

It’s worth bearing in mind that secondary legislation cannot be amended and allows little time for debate. The Government’s excessive reliance on secondary legislation has been criticised by the House of Lords Constitution Committee and the Delegated Powers and Regulatory Reform Committee.

Keep reading

What the Flock Is This: The Future of Mass Surveillance in the USA

Big Brother’s highway cameras now have AI, capturing six to 12 photos of every car that goes by. The images are all uploaded to a huge national database, which out-of-state police and government agencies can access. The cameras are very expensive, and the data is retained for 30 days.

They are in 49 states.

It’s like being watched by a bunch of prison guards.

Anyone could access it publicly. Then came the lawsuits. What does this mean for mass surveillance in the United States?

Keep reading

Was It a Coincidental Traffic Stop or AI-Powered Surveillance?

Seth Ferranti was driving his Ford pickup on a southeastern Nebraska stretch of the interstate in November 2024 when law enforcement pulled him over, claiming that he had wobbled onto the hard shoulder.

As the Seward County sheriff’s deputies questioned Ferranti, a filmmaker who had spent 21 years in prison for distributing LSD, they allegedly smelled cannabis. Declaring this probable cause, they searched the vehicle and discovered more than 400 pounds of marijuana.

But were those the actual reasons for the stop and search? When Ferranti went on trial, his attorneys presented a license plate reader report produced by the security communications company Motorola Solutions. It revealed Ferranti had been consistently monitored prior to his arrest, including by the local sheriff on the day he was apprehended. (Neither the sheriff’s office nor Motorola responded to Reason‘s requests for comment.)

Ferranti’s legal team argued that it was unconstitutional to surveil somebody based on his previous crimes. The argument did not carry the day: Last month their client was sentenced to up to two and a half years for possession of cannabis with intent to distribute. But the case still raises substantial moral and constitutional questions about both the scale of these public-private surveillance partnerships and the ways they’re being used.

Ferranti had long been a celebrity in the drug-reform world, going back to that LSD arrest in the early ’90s. After that first bust, he jumped bail, went on the lam, landed on the U.S. Marshals’ 15 Most Wanted Fugitives list, and even staged his own drowning to evade the authorities. After he started serving his sentence in 1993, he became a prolific prison journalist, writing the “I’m Busted” column for Vice. The New Jersey native always insisted that his crimes were nonviolent and that the drugs he sold, LSD and cannabis, had medicinal or therapeutic benefits.

After Ferranti came out of prison, his 2017 documentary White Boy—the true story of a teenage FBI informant who became a major cocaine trafficker—was a success on Netflix. He produced a number of further films, including 2023’s Secret History of the LSD Trade. And apparently, the government kept watching him.

It’s been watching a lot of people—and Motorola isn’t the only company helping it. Flock Safety was founded in 2017, and within five years it had tens of thousands of cameras operational. As the American Civil Liberties Union (ACLU) has warned, Flock’s AI-assisted automated license plate recognition (ALPR) system has been undergoing an “insidious expansion” beyond its supposed purposes of identifying vehicles of interest, such as stolen cars and hit-and-run suspects. Immigration and Customs Enforcement has used it to locate illegal migrants, and law enforcement in Texas used it to investigate a self-administered abortion, foreshadowing its potential use as a predictive policing tool for all Americans. Lee Schmidt, a veteran in Virginia, recently learned that the system logged him more than 500 times in four months. 

“I don’t know whether law enforcement officers are using [ALPRs] to do predictive policing,” says Joshua Windham of the Institute for Justice, a public interest law firm that is campaigning to stop the warrantless use of license plate reader cameras. “We know that [Customs and Border Protection] is using ALPRs generally to stop cars with what they deem ‘suspicious’ travel patterns.”

After reviewing the document cataloguing Ferranti’s vehicle monitoring, Windham adds: “The records are consistent with an officer either looking up a car in his system to see where else that car was captured by ALPRs, or that car showing up as a ‘hot list’ alert in the Motorola system. But it’s hard to tell, from the records alone, whether the stop was a ‘predictive policing’ stop.”

Ferranti is convinced it was. “There were no warrants, investigations, informants, state police, DEA, or FBI involvement, just Seward County Sheriff’s office [and an] AI-assisted license plate tracking service to perpetuate their outdated War on Drugs mission,” he said in an Instagram post published by his family following his sentencing. “Traveling the highways as a person with a record is now considered [suspicious] activity by the AI.”

Keep reading

AI overlords of the world hacked: Fallout from the massive Palantir breach

Palantir Technologies has been hacked, according to well-known blogger Kim Dotcom. The company develops software for intelligence and big data analysis. 

Palantir (named after the magical ‘seeing stones’ from ‘The Lord of the Rings’) doesn’t engage in surveillance in the conventional sense using spies, cameras, or bugs. Instead, it develops software that is sold to government agencies, military organizations, and large corporations.

Clients (like the CIA or the German police) upload all their data, and Palantir (its primary platforms are Gotham for military purposes and Foundry for business) then utilizes AI to transform this chaotic information into a coherent picture.

Essentially, it creates a ‘digital twin’ of reality, revealing connections that analysts could never have recognized on their own: for example, that a terrorist had called the cousin of someone who recently transferred money to a suspicious account.

The claims about wiretapping Trump and Musk are likely untrue or highly exaggerated. However, there’s no doubt that Palantir serves as a massive surveillance mechanism for monitoring America’s adversaries (and not only). It is an “operating system for war and intelligence,” providing agencies with a supercomputer that can see everything. But it’s the agencies themselves that feed this computer with data.

Keep reading

Disney Sends Cease-and-Desist Letter to Chinese ByteDance To Stop Use of Copyrighted Characters in New Seedance 2.0 AI Video Generator

ByteDance reportedly is vowing to comply.

The release of the revolutionary Seedance 2.0 AI video generator has sent Hollywood into a frenzy, with the Motion Picture Association and several industry organizations firing on all cylinders against the Chinese company ByteDance.

Now, Disney has gone one step further and sent a cease-and-desist letter to ByteDance, accusing it of ‘using Disney characters to train and power the AI generator without permission’.

Reuters reported:

“Disney said ByteDance had pre‑packaged Seedance with a pirated library of copyrighted characters from franchises including Star Wars and Marvel, portraying them as if they were public-domain clip art, the person said.

The letter alleges Seedance is reproducing, distributing and creating derivative works featuring Spider-Man, Darth Vader, and other characters, the person added.”

Keep reading

Macron’s AI Clown Show: Europe’s Digital Dilemma

The European Union has lost its place in the global race for artificial intelligence. In a single post on X, France’s President Emmanuel Macron inadvertently laid bare this convoluted situation while simultaneously revealing his own emotional fragility.

The leading representatives of the European Union like to present themselves as emotionless technocrats. Maintaining the greatest possible distance from citizens, they execute their agenda of societal transformation toward what they conceive of as a net-zero economy.

This ostentatious distance from the citizenry acts as a simulacrum of power, which, in politicians like Emmanuel Macron, often veers into the caricatural.

Macron’s striking presence in foreign affairs—whether regarding the Ukraine war or recurring provocations toward the United States—correlates with his aggressive censorship policy toward his own population. A president without a people, steering his minority government through a budgetary crisis that brings France ever closer to the fiscal abyss.

In Macron’s persona, the European malaise is condensed: economically failed, deeply unpopular among his own people, geopolitically essentially irrelevant—and yet imbued with lofty, messianic plans.

This performative play of power, coupled with barely disguised impotence and incompetence, inevitably produces an effect that can only be described as clownish. It is the expression of a political style that can no longer reconcile its claims with reality—and thus delivers less leadership than a tragicomic performance.

Keep reading

Ring Cancels Flock Safety Integration After Public Backlash

Public backlash has forced Ring to cancel its partnership with Flock Safety, the law enforcement surveillance company whose camera network has reportedly given ICE and other federal agencies access to footage across the country.

Ring announced the cancellation this week, saying the integration never went live.

The company’s statement was careful:

“Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated. We therefore made the joint decision to cancel the integration and continue with our current partners…The integration never launched, so no Ring customer videos were ever sent to Flock Safety.”

That last sentence is doing a lot of work. Ring users responding to the Flock announcement went further than strongly worded tweets. People smashed cameras. Others announced publicly that they were throwing their devices away. The Amazon-owned company had badly misread the moment.

Flock Safety is a surveillance technology company that operates a nationwide network of AI-powered cameras, primarily known for license plate readers, and sells access to the resulting database of vehicle movements to roughly 5,000 law enforcement agencies across the United States.

The Flock partnership was announced back in October 2025, and you may remember the feature report How Amazon Is Turning Your Neighborhood Into a Police Database, which gave deeper insight into the plans.

It drew pushback at the time, but it only became a full-blown crisis amid the recent outrage in some cities over ICE enforcement activity, when social media posts claimed Ring was providing a direct pipeline to ICE through Flock.

That specific claim isn’t accurate, since the Flock connection never went live. But Ring’s broader relationship with the police is real and extensive, which gave the fear enough traction to land.

Keep reading