OpenAI Employees Flagged Canada School Shooting Suspect’s Violent ChatGPT Activity Months Before Tragedy

Tumbler Ridge, British Columbia – OpenAI, the maker of ChatGPT, considered alerting Canadian authorities last year about a user engaged in “furtherance of violent activities” – months before that individual carried out one of the deadliest school shootings in Canadian history.[1]

The suspect, 18-year-old Jesse Van Rootselaar, killed eight people in the remote town of Tumbler Ridge, British Columbia, on February 10, 2026, before dying from a self-inflicted gunshot wound. The attack began at Van Rootselaar’s family home, where she fatally shot her mother and stepbrother, before proceeding to a nearby school.[1]

Among the victims at the school were a 39-year-old teaching assistant and five students aged 12 to 13. Tumbler Ridge, a community of about 2,700 residents nestled in the Canadian Rockies more than 1,000 kilometers northeast of Vancouver, has been left reeling from the tragedy.[1]

OpenAI’s Internal Debate and Account Ban

In June 2025, OpenAI’s abuse detection systems flagged Van Rootselaar’s account for activity related to violence. The San Francisco-based company weighed referring the case to the Royal Canadian Mounted Police (RCMP) but ultimately decided the activity did not meet its threshold for law enforcement referral – defined as an “imminent and credible risk of serious physical harm to others.”[1][2]

Instead, OpenAI banned the account for violating its usage policy. “We did not identify credible or imminent planning,” the company said in a statement after The Wall Street Journal reported the connection following the shooting.[1]

Police tape surrounds the school in Tumbler Ridge, B.C., following the February 10 mass shooting. (AP Photo)

After news of the shooting emerged, OpenAI employees proactively contacted the RCMP, providing details on the individual’s ChatGPT usage. “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation,” an OpenAI spokesperson said.[1]

Suspect’s Troubled History

Authorities have confirmed Van Rootselaar had prior interactions with police related to mental health, though the motive for the rampage remains under investigation.[1]

The incident has thrust OpenAI into the spotlight, raising questions about the responsibilities of AI companies in monitoring user behavior and the thresholds for involving law enforcement. While OpenAI maintains its decision reflected the absence of an imminent threat, the connection to the shooting – now apparent in hindsight – has sparked debate over whether earlier action could have prevented the loss of life.

Broader Implications for AI Safety

This case highlights ongoing challenges in the AI industry regarding content moderation and threat detection. OpenAI’s systems flagged the account for “references to violence,” leading to the ban, but internal discussions about police notification underscore the delicate balance between privacy, free speech, and public safety.[2]

Experts note that AI platforms like ChatGPT process vast amounts of user data daily, making proactive monitoring a complex ethical and technical endeavor. “Thresholds like ‘imminent risk’ are designed to prevent overreach, but tragic outcomes like this test those boundaries,” said one cybersecurity analyst not involved in the case.

The RCMP has not publicly detailed how OpenAI’s post-shooting information factors into their probe, but the company’s cooperation signals a willingness to assist amid growing scrutiny.

Community in Mourning

In Tumbler Ridge, grief counseling services have been established, and a makeshift memorial at the school site grows daily with flowers, teddy bears, and messages of condolence. Local leaders describe the town as “shattered,” with schools closed indefinitely as investigators process the scene.[1]

“This is a small community where everyone knows each other. The pain is unimaginable.” – Tumbler Ridge Mayor

The shooting marks one of Canada’s worst mass killings at a school, drawing comparisons to previous tragedies and reigniting national conversations on gun control, mental health access in rural areas, and youth violence.

OpenAI’s Response and Future Steps

OpenAI has reiterated its commitment to safety, stating it continuously refines detection tools to identify harmful use. The company’s disclosure comes amid broader regulatory pressures on Big Tech to combat online extremism and violence incitement.

Canadian officials have not indicated plans to subpoena OpenAI for further records, but the incident could influence future policies on AI accountability. Privacy advocates caution against lowering reporting thresholds, warning of potential misuse against non-threatening users.

As the investigation unfolds, the Tumbler Ridge shooting serves as a stark reminder of how technology can intersect with real-world violence. Families of the victims continue to seek answers, while OpenAI grapples with the aftermath of a decision made months before the tragedy it failed to foresee.

This story is developing. Additional details from the RCMP investigation are expected in the coming weeks.

Table of Contents