
OpenAI Staff Debated Alerting Police on Tumbler Ridge Shooter Months Before Deadly Attack

By Perplexity News Staff

Tumbler Ridge, British Columbia – In a chilling revelation, OpenAI employees raised internal alarms about a user later identified as the perpetrator of one of Canada’s deadliest school shootings months before the tragedy unfolded, but company leaders ultimately decided against notifying authorities.[1][2]

The mass shooting in the remote mountain town of Tumbler Ridge on February 10, 2026, claimed the lives of eight people, including five students aged 12 to 13, a 39-year-old teaching assistant, and two family members, before the 18-year-old suspect, Jesse Van Rootselaar, died from a self-inflicted gunshot wound.[1]

According to reports from The Wall Street Journal, Van Rootselaar’s interactions with ChatGPT triggered automated flags in June 2025 for content involving gun violence scenarios described over several days.[2] This activity prompted about a dozen OpenAI staff members to debate whether to alert the Royal Canadian Mounted Police (RCMP), with some urging leadership to take action.[2][4]

Internal Debate and Company Threshold

OpenAI confirmed it identified Van Rootselaar’s account through abuse detection efforts targeting “furtherance of violent activities.” The company banned the account in June 2025 for violating its usage policy but determined the behavior did not meet their threshold for law enforcement referral.[1][3]

“The threshold for referring a user to law enforcement is whether the case involves an imminent and credible risk of serious physical harm to others,” an OpenAI spokesperson stated. At the time, no such imminent planning was detected.[1][2]

Following the shooting, OpenAI proactively contacted the RCMP with details on the individual’s ChatGPT usage. “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation,” the spokesperson added.[1]

The Tumbler Ridge Tragedy

Tumbler Ridge, a community of about 2,700 residents nestled in the Canadian Rockies more than 1,000 kilometers northeast of Vancouver, was shattered by the violence. Police reports indicate Van Rootselaar first killed her mother and stepbrother at the family home before proceeding to a nearby school.[1]

The RCMP noted that Van Rootselaar had prior mental health contacts with police, though the motive for the attack remains under investigation and unclear.[1]

Police tape surrounded the school for days after the February 10 incident, as the tight-knit community mourned the loss of young lives and grappled with the horror of the event.[1]

Broader Implications for AI Safety

This incident has reignited debates over the responsibilities of AI companies in monitoring user activity for potential real-world harm. OpenAI’s automated systems and human reviews detected the concerning behavior, but the decision not to escalate highlights the challenges in distinguishing between hypothetical discussions and genuine threats.[2][4]

Discussions on platforms like Hacker News reflect public scrutiny, with users questioning OpenAI’s internal processes and the balance between user privacy and public safety.[4]

Experts in AI ethics have long warned that large language models like ChatGPT could be misused for planning violence, prompting calls for stricter reporting protocols. However, companies must navigate legal constraints, as unwarranted referrals could lead to privacy violations or false alarms.[1][2]

OpenAI’s Response and Ongoing Support

In statements to multiple outlets, including AP and CBC News, OpenAI emphasized its commitment to safety. The company said its tools flagged the account swiftly, leading to a ban and to post-incident cooperation with authorities.[1][2]

CityNews Vancouver reported that the account was banned specifically for references to violence, underscoring the role of internal flagging systems.[3]

As the RCMP continues its probe, questions persist about whether earlier intervention could have prevented the deaths. OpenAI maintains that, even in hindsight, there were no clear indicators of the impending attack at the time of detection.[1]

Community in Mourning

Tumbler Ridge, near the Alberta border, is known for its natural beauty and coal mining history, not violence. The shooting has left scars on a population unaccustomed to such horror. Victims’ families and survivors are receiving support, while local leaders call for mental health resources in rural areas.[1]

The tragedy draws parallels to other school shootings worldwide, amplifying discussions on gun access, youth mental health, and technology’s unintended roles in societal issues.

Looking Ahead

As investigations unfold, this case may influence AI governance. Policymakers, tech firms, and law enforcement could collaborate on refined thresholds for threat reporting, ensuring tools designed to assist do not inadvertently enable harm.

OpenAI’s experience serves as a stark reminder of the real-world stakes in AI deployment. While the company acted within its guidelines, the human cost underscores the need for evolving safeguards in an era of powerful generative technologies.
