Precrime: Months Before Massacre, OpenAI Worried About Canada’s Trans Mass Killer

Months before a Canadian man in a dress went on a Feb. 10 rampage, killing his mother and half-brother at home and then five students and an education assistant at a secondary school he had previously attended, employees at OpenAI were deeply troubled by his interactions with the firm's ChatGPT AI chatbot.

As first reported by the Wall Street Journal, Jesse Van Rootselaar’s ChatGPT activity was flagged by the company’s automated review system. When employees took a look at what he’d been up to over a several-day period in June 2025, they were alarmed. About a dozen of them debated what they should do.

Some were convinced Van Rootselaar's descriptions of gun-violence scenarios signaled a substantial risk of real-world bloodshed, and implored their supervisors to notify police, according to the Journal's unnamed sources. They opted against doing so, and a spokeswoman now says they'd concluded Van Rootselaar's messages didn't cross the threshold of posing a credible and imminent risk of serious harm. Instead, the company decided only to ban his account.

About seven months after his disturbing series of interactions with ChatGPT, police say he killed eight people and injured 25 more before killing himself at the school he'd once attended. Van Rootselaar's social media and YouTube accounts contained transgender symbolism as well as the online name "JessJessUwU" (a meme phrase that people may recognize from the bullet casings tied to the gay suspect charged in the assassination of Charlie Kirk).

Keep reading

How Developers Are Making AI Your Kid’s Third Parent In The Classroom

Under Roman law a father held a legal power called patria potestas, literally the "power of the father," amounting to total ownership of his children. He could sell them, deny them property, or abandon a newborn on a hillside. The child was not a person under the law but property. What a surprise, then, that the supposedly "paternalistic" Apostle Paul upended five centuries of that system in a single verse when he wrote "Fathers, do not exasperate your children; instead, bring them up in the training and instruction of the Lord" (Ephesians 6:4). Roman law already demanded obedience to the father under the pater familias. Paul's revolutionary move was not to challenge obedience, but to tell the man holding absolute power that he had a duty to the best interests of the child rather than to himself.

Paul's words to the Ephesians shaped Western family law for two millennia, including modern American case law (see Pierce v. Society of Sisters, 1925; Wisconsin v. Yoder, 1972). But today a different authority has moved into the space between parent and child: not a patriarch but an "aithority," an algorithm built by the largest technology corporations on earth and dropped into American classrooms through a partnership with the teachers unions. Nobody sent a permission slip home.

The scale of “the aithority” in schools is already exasperating. In late 2025, Google announced its Gemini AI education tools had reached more than 10 million students across more than 1,000 U.S. institutions. The company rolled out more than 150 new AI features in a single year, trained more than 1 million educators for free, and embedded AI tutoring modules directly into Google Classroom. Separately, Google invested $1 billion in college-level AI integration. In June 2025 the American Federation of Teachers (AFT), the second-largest teachers union in the country, announced a partnership with OpenAI, Microsoft, and Anthropic to accelerate AI adoption in classrooms nationwide. That deal was negotiated between union leadership and three of the most powerful AI companies on earth. Parents were not at the table.

Keep reading

Texas Sues Drone Maker Anzu Over Alleged Ties to CCP

Texas Attorney General Ken Paxton is suing drone-maker Anzu Robotics, alleging that the U.S.-based company misled consumers and concealed its ties with the Chinese communist regime.

Paxton announced the lawsuit on Feb. 19, accusing the Texas-based startup of rebranding products sourced from Chinese drone giant Da Jiang Innovations, commonly known as DJI.

Founded in the southern Chinese city of Shenzhen in 2006, DJI has been flagged by U.S. regulators as a security risk because of its ties to the Chinese Communist Party (CCP).

The U.S. Commerce Department added DJI to its export control list in 2020 for aiding the CCP's human rights abuses. The Treasury banned U.S.-based individuals from trading DJI shares the following year because of similar concerns. The Pentagon blacklisted DJI as a Chinese military company in 2022, noting that the Chinese regime requires all Chinese companies to make themselves available to its military under its military-civil fusion strategy.

In the lawsuit, Paxton accused Anzu of making false and misleading representations to Texans about its business relationship with DJI, data-sharing practices, and software development.

Anzu markets itself as an American-owned, made-in-Malaysia alternative, but much of its drone technology is licensed from DJI, which receives payments for every drone that Anzu orders, the complaint alleges.

Keep reading

Farmer Hailed as Hero for Rejecting Huge Payment to Turn His Land Into a Giant Data Center

The immense hype surrounding AI has caused enormous data centers to crop up across the country, triggering significant opposition. It's not just the loss of land: the facilities' massive power needs are pushing the grid toward meltdown and driving up local electricity prices, catching the attention of politicians and their irate constituents.

One 86-year-old farmer in Cumberland County, Pennsylvania, has heard enough. As local Fox affiliate WPMT reports, Mervin Raudabaugh, who has farmed the surrounding land for more than 60 years, turned down more than $15 million from data center developers in a package deal that involved three neighboring property owners as well.

The farmer was offered $60,000 per acre for his land so that developers could build a data center on it. But giving up his family legacy wasn't in the cards for him.

“I was not interested in destroying my farms,” he told WPMT. “That was the bottom line. It really wasn’t so much the economic end of it. I just didn’t want to see these two farms destroyed.”

Instead, he sold the development rights in December for just under $2 million to a conservation trust, taking a significant loss but guaranteeing that the land would stay farmland in perpetuity.

Users on social media called him a “legend,” and argued he had “more integrity than the whole government.”

“Now that is a real hero in these gutless times!” another user tweeted.

“$15M is huge, but clean water, quiet land, and legacy don’t have a price tag,” another user argued.

The sheer amount of land being earmarked for these enormous energy- and water-hungry data centers is remarkable. A data center in Mount Pleasant, Wisconsin, is set to take up 600 acres, which could cost local residents their land, as ABC News reported this week. Another octogenarian farmer, 83-year-old Tom Uttech, who has lived on his 52-acre Wisconsin property for almost 40 years, told the broadcaster that he "couldn't believe" that a local utility company was looking to build "power lines that are 300 or something feet tall, taller than apparently the Statue of Liberty," through his land to power the data center.

Per ABC, there are more than 3,000 data centers in the US, with another 1,200 currently under construction.

Keep reading

Eric Trump invests in Israeli company behind ‘low cost per kill’ drones

US President Donald Trump’s son Eric is investing heavily in a merger between Israeli drone manufacturer Xtend and Florida-based construction firm JFB Construction Holdings.

Xtend, which made weapons tested on Palestinian civilians during the genocide in Gaza, prides itself on its “low-cost-per-kill” drones. 

The $1.5-billion merger aims to take the Israeli drone maker public. Xtend will then be listed on the NASDAQ.

“I am incredibly proud to invest in companies I believe in. Drones are clearly the wave of the future. Xtend has unbelievable potential,” the US president’s son said in a statement.

Joseph F. Basile III, chief executive officer of JFB, said in a statement that “by pairing XTEND’s operating system and advanced AI capabilities with JFB’s execution, infrastructure and buildout expertise, we see a clear opportunity to accelerate US manufacturing, scale production responsibly, and support a next-generation defense technology platform built in America and ready for the public markets.”

“By combining our platform with JFB, we are acquiring the resources we need to scale our manufacturing capabilities in the US and gaining access to the US public markets,” said Xtend CEO Aviv Shapira. 

Xtend's drones are documented to have been used in attacks and targeted killings in Gaza. Reports have said an Xtend drone was used to locate the late Hamas leader Yahya Sinwar, who was killed in battle in October 2024.

Israel’s use of deadly drone warfare has been widespread and is responsible for heavy civilian casualties in Gaza as well as in Lebanon.

The new merger has been described as an expansion of Trump’s business empire, which has been branching out into various sectors, including AI and cryptocurrency.

One such AI giant that has entered into multi-billion-dollar partnerships with the US and Israeli governments is Palantir, which also played a role in the Gaza genocide.

Palantir CEO Alex Karp openly stated this week that his company is dedicated “to the service of the west and the United States of America” and aims to “disrupt” and “on occasion” to “kill” the enemies of the west and the US. 

Keep reading

Starmer Announces Yet More Censorship

Even more censorship is on the way. The Government has announced plans to force AI chatbots to comply with malicious communications laws – and to give itself Orwellian powers to bring in yet more speech restrictions without Parliamentary oversight. Toby writes about the moves in the Telegraph.

The Government intends to bring forward amendments of its own to the schools Bill that will supposedly close a loophole in the Online Safety Act, ensuring AI chatbots comply with Britain's draconian censorship laws. That will mean that if Grok says something in response to a user prompt that breaches, say, the Malicious Communications Act 1988, which was designed to protect women from obscene phone calls, Ofcom can fine its parent company £18 million or 10% of its annual global turnover, whichever is higher.

This will be the death knell of Britain’s burgeoning AI sector, particularly as chatbots become more autonomous. What tech entrepreneur will risk setting up an AI company in the UK, knowing that if a chatbot shares an anti-immigration meme or misgenders a trans person, it could mean a swingeing fine?

Indeed, I wouldn’t be surprised if xAI, along with OpenAI and Anthropic, decide to withdraw access to their chatbots from UK residents. At the very least, we’ll be saddled with lobotomised versions that trot out progressive bromides whenever they’re asked a political question.

In addition, the Government has said it will pass a new law to stop children sending or receiving nude images. Needless to say, that’s already a criminal offence under the Protection of Children Act 1978, so what does the Government have in mind?

It has not said, but I fear it means embedding surveillance software in every smartphone to enable the authorities to monitor users’ activity, no doubt accompanied by mandatory digital ID so no one will be able to hide. Not even the People’s Republic of China does that.

The Government unveiled some other Orwellian measures, but rather than spell them out as revisions to the schools Bill, it will put through amendments enabling it to make further changes to Britain's censorship regime via secondary legislation; in other words, it will grant itself sweeping Henry VIII powers.

It’s worth bearing in mind that secondary legislation cannot be amended and allows little time for debate. The Government’s excessive reliance on secondary legislation has been criticised by the House of Lords Constitution Committee and the Delegated Powers and Regulatory Reform Committee.

Keep reading

What the Flock Is This: The Future of Mass Surveillance in the USA

Big Brother's highway cameras now have AI. They capture six to 12 photos of every car that goes by, all of which are uploaded to a huge national database that out-of-state police and government agencies can access. The cameras are very expensive, and the data is kept for 30 days.

The cameras are in 49 states.

It’s like being watched by a bunch of prison guards.

Anyone could access the database publicly. Then came the lawsuits. What does this mean for mass surveillance in the United States?

Keep reading

UK Government Plans to Use Delegated Powers to Undermine Encryption and Expand Online Surveillance

The UK government wants to scan people’s photos before they send them. Not just children’s photos. Everyone’s.

Technology Secretary Liz Kendall spelled it out on BBC Breakfast, floating a proposal to “block photographs being sent that are potentially nude photographs by anybody or block children from sending those.” That second clause is the tell. Blocking “anybody” from sending potentially nude images requires scanning everybody’s messages. There’s no technical path to that outcome that doesn’t involve reading content the sender assumed was private.

Kendall said the government is conducting a consultation on “whether we should have age limits on things like live streaming” and whether there should be “age limits on what’s called stranger pairing, for example, on games online.” The consultation, she said, will look at all of these. That list now covers messaging apps, photo sharing, gaming, and live streaming. Any feature that lets you share an image with another person potentially falls inside it.

This is how the mandate grows. The government announced a push for new delegated powers on February 16, framing them around age verification for social media and VPNs.

Keep reading

Jeffrey Epstein Recruited NSA Codebreakers for Genome “Manhattan Project”

In the decade before Russia’s invasion of Ukraine in 2022, the U.S. and Russia were engaged in high-stakes exchanges of advanced technology involving the Massachusetts Institute of Technology (MIT) and the Skolkovo Innovation Center—a Russian government-backed technology hub that aimed to jump-start a “venture” innovation ecosystem in Moscow.

Jeffrey Epstein sat at the crossroads of academia, philanthropy, and venture finance as these global capital flows were threatened by the brewing confrontation in Ukraine.

In 2013, during the early cryptocurrency boom, Epstein sought an audience with Vladimir Putin to encourage the Russian president to shift course from the MIT–Skolkovo model. Instead of playing “catch up” with the United States through venture-backed startups, Epstein proposed, Russia could help lead a new financial system based on a novel global currency.

Epstein funded the early development of cryptocurrency through the MIT Digital Currency Initiative, founded in 2015. MIT’s Bitcoin Core Development Fund helped pay bitcoin’s early developers to maintain the open-source software authored by Satoshi Nakamoto, bitcoin’s anonymous inventor. Epstein was an early investor in Coinbase, and he was friends with Brock Pierce, the co-founder of U.S. dollar stablecoin company Tether, which operates, in effect, the world’s largest crypto bank.

Epstein was also recruiting cryptographers to a more ambitious project: hacking the human genome. In an email to a redacted recipient in August 2012, Epstein wrote, “My biology gurus at harvard all agree that the signal intelligence used by the various agencies , could be put to work on breaking the dna code or protein signal problems. breaking foreign codes is the expertise of the us and nsa.” Epstein prompted the recipient to help him recruit “code breakers” from the various intelligence agencies: “it would be great to know which agency button to push.”

Keep reading

Was It a Coincidental Traffic Stop or AI-Powered Surveillance?

Seth Ferranti was driving his Ford pickup on a southeastern Nebraska stretch of the interstate in November 2024 when law enforcement pulled him over, claiming that he had wobbled onto the hard shoulder.

As the Seward County sheriff's deputies questioned Ferranti, a filmmaker who had spent 21 years in prison for distributing LSD, they allegedly smelled cannabis. Declaring this probable cause, they searched the vehicle and discovered more than 400 pounds of marijuana.

But were those the actual reasons for the stop and search? When Ferranti went on trial, his attorneys presented a license plate reader report produced by the security communications company Motorola Solutions. It revealed Ferranti had been consistently monitored prior to his arrest, including by the local sheriff on the day he was apprehended. (Neither the sheriff’s office nor Motorola responded to Reason‘s requests for comment.)

Ferranti’s legal team argued that it was unconstitutional to surveil somebody based on his previous crimes. The argument did not carry the day: Last month their client was sentenced to up to two and a half years for possession of cannabis with intent to distribute. But the case still raises substantial moral and constitutional questions about both the scale of these public-private surveillance partnerships and the ways they’re being used.

Ferranti had long been a celebrity in the drug-reform world, going back to that LSD arrest in the early ’90s. After that first bust, he jumped bail, went on the lam, landed on the U.S. Marshals’ 15 Most Wanted Fugitives list, and even staged his own drowning to evade the authorities. After he started serving his sentence in 1993, he became a prolific prison journalist, writing the “I’m Busted” column for Vice. The New Jersey native always insisted that his crimes were nonviolent and that the drugs he sold, LSD and cannabis, had medicinal or therapeutic benefits.

After Ferranti came out of prison, his 2017 documentary White Boy—the true story of a teenage FBI informant who became a major cocaine trafficker—was a success on Netflix. He produced a number of further films, including 2023’s Secret History of the LSD Trade. And apparently, the government kept watching him.

It’s been watching a lot of people—and Motorola isn’t the only company helping it. Flock Safety was founded in 2017, and within five years it had tens of thousands of cameras operational. As the American Civil Liberties Union (ACLU) has warned, Flock’s AI-assisted automated license plate recognition (ALPR) system has been undergoing an “insidious expansion” beyond its supposed purposes of identifying vehicles of interest, such as stolen cars and hit-and-run suspects. Immigration and Customs Enforcement has used it to locate illegal migrants, and law enforcement in Texas used it to investigate a self-administered abortion, foreshadowing its potential use as a predictive policing tool for all Americans. Lee Schmidt, a veteran in Virginia, recently learned that the system logged him more than 500 times in four months. 

"I don't know whether law enforcement officers are using [ALPRs] to do predictive policing," says Joshua Windham of the Institute for Justice, a public interest law firm that is campaigning to stop the warrantless use of license plate reader cameras. "We know that [Customs and Border Protection] is using ALPRs generally to stop cars with what they deem 'suspicious' travel patterns."

After reviewing the document cataloguing Ferranti's vehicle monitoring, Windham adds: "The records are consistent with an officer either looking up a car in his system to see where else that car was captured by ALPRs, or that car showing up as a 'hot list' alert in the Motorola system. But it's hard to tell, from the records alone, whether the stop was a 'predictive policing' stop."

Ferranti is convinced it was. “There were no warrants, investigations, informants, state police, DEA, or FBI involvement, just Seward County Sheriff’s office [and an] AI-assisted license plate tracking service to perpetuate their outdated War on Drugs mission,” he said in an Instagram post published by his family following his sentencing. “Traveling the highways as a person with a record is now considered [suspicious] activity by the AI.”

Keep reading