
OpenAI had banned account of Tumbler Ridge, B.C., shooter
CBC
OpenAI, the American company behind ChatGPT, has said that last June it banned the account associated with the teenager behind the mass shooting in Tumbler Ridge, B.C.
The company said, in response to questions from CBC News, that Jesse Van Rootselaar's account was detected via automated tools and human investigations that "identify misuses of our models in furtherance of violent activities."
In its statement, OpenAI said that the account's activity in June 2025 didn't meet the "higher threshold required" to refer it to law enforcement.
That threshold, according to the company, requires an "imminent and credible risk" of serious physical harm, and Van Rootselaar's use of ChatGPT at the time didn't meet that bar.
Van Rootselaar has been identified by RCMP as the person who killed eight people in the northeast B.C. community on Feb. 10, including five children and an education assistant at Tumbler Ridge Secondary School, before killing herself.
The story about her ChatGPT account was first reported by the Wall Street Journal.
In its statement, OpenAI said that after learning of the shooting, it proactively reached out to RCMP with information on Van Rootselaar and her use of ChatGPT.
An RCMP spokesperson confirmed to CBC News that OpenAI reached out after the shooting, but said the company had only flagged the account internally at first.
"What I can say is that as part of the investigation, digital and physical evidence is being collected, prioritized, and methodically processed," Staff Sgt. Kris Clark said in a statement.
"This includes a thorough review of the content on electronic devices, as well as social media and online activities."
In its statement, OpenAI defended its threshold for referring cases to law enforcement, arguing that "over-enforcement" could be distressing for young people and their families, and could also raise privacy concerns.
OpenAI also says that its chatbot is trained to avoid giving advice that could result in immediate physical harm.
"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," the company says on its website.
OpenAI adds that it is reviewing the circumstances of the Tumbler Ridge case to see if improvements can be made to its criteria for referring cases to law enforcement.
