
OpenAI says it would've flagged Tumbler Ridge shooter's account to police under new protocol
CBC
OpenAI says it would've referred the Tumbler Ridge, B.C., shooter's ChatGPT account to police under new safety policies the company has implemented in recent months.
Ann O'Leary, OpenAI's vice-president of global policy, wrote in a letter to Artificial Intelligence Minister Evan Solomon that the California-based company partnered with mental health and law enforcement experts "several months ago" to update its safety protocol.
"Mental health and behavioural experts now help us assess difficult cases, and we have made our referral criteria more flexible to account for the fact that a user may not discuss the target, means and timing of planned violence in a ChatGPT conversation but that there may be potential risk of imminent violence," O'Leary wrote in the letter that has been shared with media.
It's not clear from the letter when the new protocol took effect, but the company didn't flag Jesse Van Rootselaar's account to police when it banned the account last June.
OpenAI has said Van Rootselaar's activities didn't meet the company's threshold for informing law enforcement because the company didn't identify credible or imminent planning at the time.
"With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today," O'Leary's letter reads.
Earlier this month, Van Rootselaar killed her mother and half-brother at the family home before going to the local secondary school, where she killed five students, an educational assistant and then herself.
O'Leary said in the letter that the company found a second account belonging to Van Rootselaar after her name was made public in the wake of the murders and shared the account with police.
The letter comes after senior company officials met with Solomon and other ministers.
Solomon said after the meeting that he was disappointed the company didn't provide thorough answers.
"We expected [OpenAI] to have some concrete proposals that we could understand, that [they] had changed their protocols in the wake of the horrific tragedy in Tumbler Ridge. But we did not hear any substantial new safety protocols outside of some changes to their model," Solomon said Tuesday night.
O'Leary's letter outlined some commitments the company is making to address the government's concerns, including: establishing a direct point of contact with Canadian law enforcement, upgrading its model to allow the company to direct users to local mental health supports when warranted, and strengthening its detection systems to help identify repeat policy violators.
A spokesperson from Solomon's office said the government is reviewing the letter and "will have more to say in the coming days."
