An 18-year-old transgender teenager in Tumbler Ridge, British Columbia, is alleged to have used OpenAI's chatbot ChatGPT in the run-up to a February 10 school shooting that killed eight people: her mother, her 11-year-old brother, five students, and an education assistant. She then took her own life. OpenAI had already flagged and banned one of Jesse Van Rootselaar's accounts months earlier for "misuses of our models in furtherance of violent activities," yet did not alert police. According to a civil claim filed in British Columbia, roughly a dozen employees identified the chats as signalling imminent risk, leadership refused to contact law enforcement, and the shooter later opened a second account and continued planning.
What Happened in Tumbler Ridge?
The massacre began at home. Police said Van Rootselaar killed her mother and sibling before going to a school in Tumbler Ridge, where an educator and five students were shot dead. Two others were hospitalised with serious injuries. Reuters described it as one of Canada’s worst mass killings. Police also said they had previously removed guns from the home and were aware of the teenager’s mental health history.
That would already be a story of institutional failure. But the AI angle makes it worse. OpenAI later admitted it had banned Van Rootselaar’s ChatGPT account in June 2025 after detecting violent misuse. The company said it considered referring the case to law enforcement, but decided the activity did not meet its threshold because it could not identify “credible or imminent planning.” Months later, eight people were dead.
OpenAI then told Canadian officials that, under its newer, "enhanced" law-enforcement referral protocol, the activity that led to the initial account ban would now be referred to police. That is an extraordinary concession. It amounts to an admission that the safeguard in place at the time was inadequate to the risk in front of it.
The Lawsuit Against OpenAI / ChatGPT
The most serious details now sit inside a civil claim brought by the family of a surviving victim. The filing alleges that Van Rootselaar, then 17, spent days describing gun-violence scenarios to ChatGPT in late spring or early summer 2025. It says the platform’s monitoring system flagged those conversations, routed them to human moderators, and that approximately 12 OpenAI employees identified them as indicating an imminent risk of serious harm and recommended that Canadian law enforcement be informed. The claim alleges leadership refused that request and merely banned the first account.
The same filing alleges the shooter later opened a second OpenAI account, used it to continue planning a mass-casualty event, and received “mental health counselling and pseudo-therapy” from ChatGPT. It further alleges the chatbot equipped the shooter with information on methods, weapons, and precedents from other mass casualty events. These are allegations, not proven findings, but if they are even broadly accurate, the case is not simply about a product being misused. It is about a company building an intimate, persuasive machine that could flag danger, simulate empathy, and still fail to stop the person it had already flagged.
The filing also accuses OpenAI of deliberately designing GPT-4o in a more human, warmer, more sycophantic style that could foster psychological dependency and reinforce users' views rather than redirect them. These claims fit a wider concern now being raised by researchers, families, and even some people inside the industry: a chatbot that is rewarded for being agreeable can become dangerous precisely when a human being most needs resistance.