Precrime: Months Before Massacre, OpenAI Worried About Canada’s Trans Mass Killer

Months before a Canadian man in a dress went on a Feb. 10 rampage, killing his mother and half-brother at home before slaughtering five students and an education assistant at a secondary school he had formerly attended, employees at OpenAI were deeply troubled by his interactions with the firm's ChatGPT chatbot.

As first reported by the Wall Street Journal, Jesse Van Rootselaar’s ChatGPT activity was flagged by the company’s automated review system. When employees took a look at what he’d been up to over a several-day period in June 2025, they were alarmed. About a dozen of them debated what they should do.

Some were convinced Van Rootselaar’s descriptions of gun-violence scenarios signaled a substantial risk of real-world bloodshed and implored their supervisors to notify police, according to the Journal’s unnamed sources. The supervisors opted against doing so; a spokeswoman now says the company had concluded his exchanges didn’t cross the threshold of posing a credible and imminent risk of serious harm. Instead, it decided only to ban his account.

About seven months after that disturbing series of interactions with ChatGPT, police say, he killed eight people and injured 25 more before killing himself at the school he had once attended. Van Rootselaar’s social media and YouTube accounts featured transgender symbolism as well as the online name “JessJessUwU” (the “UwU” being a meme phrase people may recognize from the bullet casings tied to the gay suspect charged in the assassination of Charlie Kirk).



Author: HP McLovincraft

Seeker of rabbit holes. Pessimist. Libertine. Contrarian. Your huckleberry. Possibly true tales of sanity-blasting horror also known as abject reality. Prepare yourself. Veteran of a thousand psychic wars. I have seen the fnords. Deplatformed on Tumblr and Twitter.
