What the Tumbler Ridge mass shooting reveals about regulating AI


Following last month’s mass shooting in Tumbler Ridge, B.C., questions are mounting about what artificial intelligence companies should do when users share disturbing content with their systems.

It comes after OpenAI, the company behind ChatGPT, acknowledged it flagged and banned an account belonging to 18-year-old Jesse Van Rootselaar roughly six months before she killed eight people, most of them children, and then herself on Feb. 10.

The U.S. tech company said it did not alert police at the time because the account’s activity in June 2025 didn’t meet the “higher threshold required.”

OpenAI’s response prompted anger and frustration from provincial and federal officials, including from B.C. Premier David Eby, who said the tragedy might have been prevented had the company alerted authorities earlier.

However, some experts say knowing when to flag a user interacting with a chatbot is complicated.

What do we know about OpenAI’s decision?

OpenAI has said that Van Rootselaar’s account was detected via automated tools and human investigations that “identify misuses of our models in furtherance of violent activities.”

The account was banned but the company said the content did not meet its internal standard for referral to law enforcement, which requires signs of “imminent and credible risk” of serious physical harm.

WATCH | AI minister wants to see detailed safety plan from OpenAI:

OpenAI must be more clear about commitment to change: AI minister

OpenAI says it will make changes to its safety and security protocols in the wake of the Tumbler Ridge, B.C., shooting. But Artificial Intelligence Minister Evan Solomon said in a statement he wants to see a detailed plan — and that he wants to meet with CEO Sam Altman.

After the shooting, OpenAI discovered the teen had created a second account to get around the ban. The company said that once it learned of the shooting, it proactively reached out to the RCMP with information on Van Rootselaar.

What exactly the 18-year-old discussed with ChatGPT has not been disclosed, and it isn’t known what the chatbot said in response.

Are AI companies legally required to report threats in Canada?

No. Canada does not currently have a regulatory framework specific to AI.

While existing laws in areas such as health and criminal activity apply to certain uses of AI, there is no federal law requiring AI companies to report potentially violent users to police.

Alan Mackworth, a professor emeritus of computer science at the University of British Columbia, says reporting standards are voluntary and set by individual companies.

WATCH | Coalition calls on government to retable Online Harms Act:

Coalition calls on government to retable Online Harms Act before 2026

Andrea Chrysanthou, chair of the board for Children First Canada, launched the Countdown for Kids campaign on Thursday, part of a coalition calling on the Liberal government to reintroduce online harms legislation before the end of 2025. At a news conference in Ottawa, Chrysanthou said: ‘We’ve waited years for action.’

“We just can’t rely on the companies to voluntarily stand up,” Mackworth said. “There needs to be some public accountability by having a regulatory agency with enforcement powers to check standards.” 

The UBC professor says Canada lags behind the European Union, which passed its AI Act in 2024, and the United Kingdom, which has its Online Safety Act.

Canada’s Liberal government introduced an online harms bill in 2024, which would have imposed new requirements on social media companies and created an online regulator. But the bill never became law because the 2025 election was called.

Mackworth argues that AI companies should have something like a “duty to report,” similar to teachers or doctors who are legally required to report suspected harm to a minor. 

Where’s the line between safety and privacy?

OpenAI has made several commitments in the wake of the tragedy, including establishing a direct point of contact with Canadian law enforcement, upgrading its models to direct users to local mental health supports when warranted, and strengthening its detection systems.

According to the tech firm, under its updated referral system it would refer the account banned in June 2025 to law enforcement “if it were discovered today.”

Moira Aikenhead, a lecturer at UBC’s Peter A. Allard School of Law, cautions against assuming that reporting conversations with AI would necessarily have stopped the tragedy.

WATCH | ‘Something could have been done’ if OpenAI reported: Elizabeth May:

‘Something could have been done’ if OpenAI reported what they knew about Tumbler Ridge shooter: May

On Friday, Green Party Leader Elizabeth May reacted to new details about OpenAI’s protocol responding to the Tumbler Ridge, B.C., shooter’s banned ChatGPT account — and a recently reported second account. ‘Something could have been done if only the rich bastards in the AI industry had reported what they knew,’ May said.

“People in the wake of tragedies want answers,” she said. “But, when we’re looking at creating a new digital policy, we need to be really cautious that we avoid knee-jerk reactions.”

Conversations with ChatGPT are not posts on a public forum but private exchanges between a user and a company. However, the UBC lecturer says that if companies begin reporting those private queries, many Canadians would have serious privacy concerns.

Can AI systems reliably detect real threats?

Context is another issue Aikenhead raises: people can ask chatbots about almost anything without any real intent to cause harm.

“You can have kids typing in ‘How could I commit the perfect crime?’ out of curiosity to see what ChatGPT says,” she added. “That could potentially put this child on the RCMP’s radar.”

Aikenhead argues that if reporting thresholds are expanded, they must be transparent, clearly defined and shaped by government regulation, not by private firms.

A memorial to the victims of the Tumbler Ridge Secondary School shooting is pictured on Thursday, Feb. 12. (Ben Nelms/CBC)

Even with regulation, experts say there are technical limits.

Vered Shwartz, an assistant professor at UBC who specializes in AI, says tech companies review massive volumes of conversations, and judging whether content reflects fantasy, curiosity or real intent is not straightforward.

“The question of reporting someone before something happens is a very hard question,” Shwartz said. “It’s kind of similar to how the police can’t arrest someone unless they have grounds to believe that they are going to commit a crime.”

She says users could be wrongly flagged, and gave the example of a father whose account was disabled by Google after a photo of his infant son, which he had sent to a doctor, was flagged as “harmful content.”

WATCH | Minister troubled by talks with OpenAI:

Minister troubled by talks with OpenAI after Tumbler Ridge, B.C., shooting

Canada’s minister of artificial intelligence called OpenAI, the maker of ChatGPT, to Ottawa to talk about safety and risk-assessment protocols after learning the company banned the Tumbler Ridge, B.C., shooter for troubling posts but didn’t report them to police.

What’s next?

Artificial Intelligence Minister Evan Solomon says OpenAI’s recent commitments to adjust its policies do not go far enough.

Solomon says he is meeting with OpenAI CEO Sam Altman this week to seek further clarity on stronger safety protocols. 

He says he will also sit down with other major tech companies, adding that all regulatory options remain on the table.