OpenAI Staff Warned Leadership About Canada School Shooter Months Before Tumbler Ridge Massacre
By Perplexity News Staff | Published February 22, 2026
OpenAI employees raised internal alarms about a user later identified as the perpetrator of one of Canada’s deadliest school shootings, but company leadership decided against alerting authorities in the months before the tragedy.
The Wall Street Journal first reported that staff at the ChatGPT maker flagged suspicious activity on an account belonging to Jesse Van Rootselaar, an 18-year-old from Tumbler Ridge, British Columbia, as early as June 2025. The account was banned for violating OpenAI’s usage policy against the “furtherance of violent activities,” but the company determined it did not meet its threshold for law enforcement referral: an “imminent and credible risk of serious physical harm to others.”
The Tumbler Ridge Tragedy
On February 10, 2026, Van Rootselaar attacked the remote mountain town of Tumbler Ridge, a community of just 2,700 residents in the Canadian Rockies, more than 1,000 kilometers northeast of Vancouver. The shooter first killed her mother and stepbrother at the family home, then proceeded to a nearby school, where she murdered six more people: a 39-year-old teaching assistant and five students aged 12 to 13. Van Rootselaar then died of a self-inflicted gunshot wound.
Police tape still cordoned off the school days later as investigators worked to establish a motive, which remains unclear. The Royal Canadian Mounted Police (RCMP) noted Van Rootselaar’s prior mental-health-related contacts with law enforcement, a history that has become part of the investigation.
OpenAI’s Internal Debate and Aftermath
According to the reports, OpenAI’s abuse detection systems identified Van Rootselaar’s account in June 2025. Internal discussions ensued, with some employees urging leadership to notify the RCMP, but the company concluded there was no evidence of “credible or imminent planning” and banned the account without further escalation.
Following news of the shooting, OpenAI contacted the RCMP with details of the individual’s ChatGPT usage. “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation,” a company spokesperson said.

Broader Implications for AI Safety
The incident thrusts OpenAI into the spotlight amid growing scrutiny of AI platforms’ role in detecting and preventing real-world violence. The company’s threshold for law enforcement referrals, an “imminent and credible risk of serious physical harm to others,” has sparked debate on online forums, where users question whether earlier intervention could have averted the deaths.
Experts in AI ethics argue that such cases highlight the tension between user privacy and public safety. “Tech companies are increasingly positioned as first-line sentinels against harm, but their policies must evolve,” said one cybersecurity analyst, speaking on condition of anonymity. OpenAI’s decision not to alert police in June 2025, despite employee concerns, underscores the challenges of balancing proactive moderation with legal constraints.
RCMP Investigation Ongoing
The RCMP continues to probe Van Rootselaar’s online footprint, including her interactions with ChatGPT. Authorities have not disclosed whether or how the chatbot factored into planning for the attack, but OpenAI’s cooperation is aiding the effort. The shooter’s history of mental health issues may provide crucial context, though no manifesto or clear ideological motive has surfaced.
Tumbler Ridge, known for its natural beauty and coal mining heritage near the Alberta border, is grappling with profound grief. Vigils have drawn hundreds, with community leaders calling for mental health resources and school safety reforms.
Tech Industry Reactions
The revelation has rippled through Silicon Valley, where competitors such as Anthropic and xAI have faced similar criticism for content moderation lapses. OpenAI, valued at over $150 billion, says its abuse detection is robust but insists its policies prioritize verified threats.
“We identified the account via abuse detection efforts for furtherance of violent activities… but did not identify credible or imminent planning.” — OpenAI Statement
Critics, including some OpenAI insiders cited in the WSJ report, contend the bar for intervention is set too high. Discussions on platforms like Hacker News reveal public frustration: “Employees asked leadership to alert authorities, but it didn’t happen,” one commenter noted, linking to the original WSJ article.
Policy and Legal Ramifications
Canadian lawmakers are eyeing stricter regulations for AI firms operating in the country. Privacy laws such as the Personal Information Protection and Electronic Documents Act (PIPEDA) complicate voluntary referrals, forcing companies to weigh disclosure against liability risk. U.S.-based OpenAI must comply with both countries’ rules, raising the stakes further.
OpenAI has not announced policy changes in the wake of the shooting, though internal reviews are likely underway. The tragedy is a stark reminder of AI’s dual potential: innovation and peril.
As Tumbler Ridge heals, a question lingers: could earlier action have saved lives? OpenAI’s position, that no imminent threat was evident, may hold up legally, but it fuels calls for more aggressive monitoring in an era when digital breadcrumbs often precede real-world violence.