
OpenAI Staff Flagged Canada School Shooter’s ChatGPT Activity Months Before Tragedy, But Leaders Held Back From Police Alert

Tumbler Ridge, British Columbia — Employees at OpenAI, the creators of ChatGPT, raised serious concerns about a user’s violent interactions with the AI chatbot months before the individual carried out one of Canada’s deadliest school shootings, according to a bombshell report from The Wall Street Journal.[1][2]

The suspect, 18-year-old Jesse Van Rootselaar, killed eight people in the remote town of Tumbler Ridge on February 10, 2026, before dying from a self-inflicted gunshot wound. The attack began at Van Rootselaar’s family home, where she fatally shot her mother and stepbrother, before targeting a nearby school, claiming the lives of a 39-year-old teaching assistant and five students aged 12 to 13.[1]

Police tape surrounds the school in Tumbler Ridge, British Columbia, following the February 10 mass shooting. (AP Photo)

Internal Debates at OpenAI

Back in June 2025, OpenAI’s automated abuse detection systems flagged Van Rootselaar’s account for “furtherance of violent activities.” Over several days, the user described scenarios involving gun violence in chats with ChatGPT, alarming about a dozen employees who debated whether to alert authorities.[2]

Staff urged company leaders to contact the Royal Canadian Mounted Police (RCMP), but OpenAI determined the activity did not meet its strict threshold for law enforcement referral: an “imminent and credible risk of serious physical harm to others.” Instead, the company banned the account for violating its usage policy.[1][2][3]

“Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation,” an OpenAI spokesperson said.[1]

After news of the shooting broke, OpenAI employees contacted the RCMP with details on Van Rootselaar’s ChatGPT usage, vowing ongoing cooperation.[1]

The Tumbler Ridge Horror

Tumbler Ridge, a town of about 2,700 residents nestled in the Canadian Rockies more than 1,000 kilometers northeast of Vancouver, was shattered by the violence. The motive remains unclear, though police noted Van Rootselaar had prior mental health contacts.[1]

The RCMP confirmed the shooter first attacked her family before moving to the school. The community, near the Alberta border, is still reeling from the loss.[1]

Questions Over AI Responsibility

The Wall Street Journal’s exclusive reporting has ignited debates about AI companies’ obligations to monitor and report potentially dangerous user behavior. OpenAI’s decision not to alert police in June 2025, despite internal alarms, raises thorny questions: When does online chatter cross into real-world threats? And who decides?[2]

Experts note that tech firms face a delicate balance: over-reporting could infringe on privacy and free speech, while under-reporting risks public safety. OpenAI’s threshold emphasizes imminence, but critics argue that flagged violent fantasies warrant closer scrutiny, particularly from a user with a documented history of mental health contacts.[1][2]

OpenAI, based in San Francisco, banned the suspect’s account but did not alert police at the time. (File Photo)

Broader Implications for AI Safety

This incident spotlights growing scrutiny on AI platforms amid rising concerns over misuse. ChatGPT and similar tools have been linked to everything from misinformation to planning crimes, prompting calls for stricter regulations.

OpenAI has invested heavily in safety measures, including automated flagging and human review. Yet the Tumbler Ridge case exposes their limits: detecting violent content is one thing; predicting real-world action is another.[3]

RCMP investigators continue probing the shooting, with OpenAI’s post-tragedy cooperation providing valuable data. Community members in Tumbler Ridge held vigils this week, mourning the young victims and grappling with unimaginable loss.

Community in Mourning

Local leaders described the town as “heartbroken.” Schools remain closed as counseling services expand. Families of the victims, including the teaching assistant who heroically tried to protect students, shared stories of resilience amid grief.

Van Rootselaar’s prior police contacts over mental health concerns add another layer to the tragedy, underscoring gaps in early intervention. Advocates are now pushing for better mental health resources in rural Canada.

As investigations unfold, the OpenAI revelations could spur policy changes. Lawmakers in the U.S. and Canada may examine how AI firms handle threat detection, potentially mandating lower thresholds for reporting.

OpenAI reiterated its commitment to safety, but for the people of Tumbler Ridge, the “what ifs” linger. Could an earlier tip have prevented this horror? The answer may shape the future of AI accountability.