OpenAI agrees to strengthen safeguards following B.C. mass shooting: minister – BC | Globalnews.ca


Federal Artificial Intelligence Minister Evan Solomon says the CEO of OpenAI has agreed to take several actions to bolster safety, including providing a report outlining the new systems the firm is developing to identify high-risk offenders and policy violators.


A statement from Solomon following his meeting Wednesday with Sam Altman says the minister will also ask the Canadian AI Safety Institute to examine the company’s model and provide expert technical advice to his office.

The meeting follows the revelation that OpenAI banned the mass shooter in Tumbler Ridge, B.C., from using its ChatGPT chatbot last June due to worrisome interactions but did not alert law enforcement before the killings last month.


Video: South Peace MLA calls for full Tumbler Ridge inquiry


OpenAI has said new protocols would have resulted in Jesse Van Rootselaar’s interactions being flagged to police, but Solomon says the tragedy “demands answers and stronger safeguards when powerful AI technologies are involved.”


Solomon says the actions Altman has agreed to take include establishing a direct point of contact with RCMP and implementing safety protocols that direct people “experiencing distress” to appropriate local services.


The minister says Altman also confirmed the company would apply its new safety standards retroactively and review previously flagged cases.


Video: AI minister ‘disappointed’ with OpenAI meeting on Tumbler Ridge shooter


“This will determine whether additional incidents that would have been referred to law enforcement under OpenAI’s new safety standards were missed, and ensure they are promptly reported to the RCMP,” Solomon’s statement says.

It says the company has also committed to assessing how it would incorporate Canadian privacy, mental health and law enforcement experts into the process of identifying and reviewing high-risk cases involving Canadian users of OpenAI technology.

Van Rootselaar fatally shot eight people in Tumbler Ridge on Feb. 10, including six children, before killing herself.


B.C. Attorney General Niki Sharma said Premier David Eby would meet Altman to find out whether the company could have prevented the shootings.


Video: Inquest to be held into Tumbler Ridge school shooting


Sharma said there is a larger question for Ottawa when it comes to regulating and overseeing platforms like OpenAI.

The Altman meetings come after B.C.’s chief coroner, Dr. Jatinder Baidwan, on Tuesday announced an inquest into the shootings that will consider the role of artificial intelligence.

Sharma said she hopes OpenAI will participate in the inquest and share whatever it knows.

This report by The Canadian Press was first published March 4, 2026.


© 2026 The Canadian Press


AI minister to meet with OpenAI’s Sam Altman on Tumbler Ridge shooting | Globalnews.ca


Canada’s artificial intelligence minister will meet virtually with OpenAI CEO Sam Altman on Wednesday afternoon to discuss changes the company has committed to making after last month’s mass shooting in Tumbler Ridge, B.C.


The timing was confirmed to Global News by a spokesperson for AI Minister Evan Solomon’s office.

Solomon sought the meeting with Altman after OpenAI said last week it would enhance its police referral and repeat offender detection practices, among other new safety measures, after it did not flag the Tumbler Ridge shooter’s ChatGPT activity to police last summer.

The company, which said it disabled Jesse VanRootselaar’s account in June over “violent” activity, said in a statement that it had also discovered a second ChatGPT account linked to her name after the shooting, despite a system that flags repeat policy offenders.


OpenAI ultimately alerted RCMP to the shooter’s ChatGPT activity after the mass shooting, in which eight people died and dozens more were injured. The shooter took her own life.


OpenAI acknowledged in its statement last week that, “under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today.”


Video: OpenAI representatives summoned to Ottawa over Tumbler Ridge shooting


Solomon said in a statement last week that OpenAI’s commitments, while welcome, did not include “a detailed plan for how these commitments will be implemented in practice” and that more clarity was needed.

“I will be meeting directly with OpenAI CEO Sam Altman next week to seek further clarity and to ensure that the commitments made are translated into concrete action,” he wrote.

Altman has yet to comment publicly on the Tumbler Ridge shooting, the commitments his company has made in response, or his meeting with Solomon.


British Columbia Premier David Eby has said he will also meet with Altman, but a date for that meeting has not yet been announced. Global News has reached out to Eby’s office for comment.

OpenAI’s commitments came after company representatives met with Solomon and three other federal ministers in Ottawa to discuss its safety practices.

The ministers left the meeting “disappointed” that OpenAI did not present “concrete actions” it would take in response, while experts and opposition MPs called on the government to step in with regulations.

Solomon has not ruled out legislation to address police referral practices for AI companies that detect violent behaviour on their platforms. The minister has said he will meet with other companies in the coming weeks to discuss the issue.

Eby has called for a national standard for police referrals, calling OpenAI’s improvements and commitments for change “cold comfort for the people in Tumbler Ridge.”


© 2026 Global News, a division of Corus Entertainment Inc.


AI minister wants more clarity on OpenAI’s changes after Tumbler Ridge | Globalnews.ca


Artificial Intelligence Minister Evan Solomon says he wants more clarity on OpenAI’s committed safety protocol changes after the Tumbler Ridge, B.C., mass shooting, and isn’t ruling out legislative changes to address the issue.


The company behind ChatGPT on Thursday said it would enhance its police referral and repeat offender detection practices, after it did not elevate the shooter’s AI chatbot activity to police months before she killed eight people and wounded dozens of others.

In a statement Friday, Solomon said OpenAI’s statement did not include “a detailed plan for how these commitments will be implemented in practice.”

He said he would be meeting with CEO Sam Altman next week to “seek further clarity” and assurances of “concrete action.”

“The tragedy in Tumbler Ridge has raised serious questions about how digital platforms respond when credible warning signs of violence emerge,” the minister said. “Canadians deserve greater clarity about how human review decisions are made, how escalation thresholds are applied, and how privacy considerations are balanced with public safety.


“We will be seeking further clarity on how human review is conducted and whether Canadian context and best practices are appropriately embedded in those decisions. I will also be consulting with my cabinet colleagues on additional options.”

Solomon added he would also be meeting with other AI companies in the coming weeks “to ensure there is a consistent and clear approach to escalation, local coordination, and youth protection.”


“Decisions affecting Canadians must reflect Canadian laws, Canadian standards, and Canadian expertise,” he said.

“All options remain on the table as we assess what further steps may be necessary. Public safety must come first.”

Solomon and other federal ministers expressed frustration with OpenAI after the company did not present an action plan during a meeting in Ottawa on Tuesday.

The ministers said they would give OpenAI a chance to come back with one before considering a legislative response to the issue of how AI companies handle and address users’ violent behaviour.


Video: OpenAI representatives summoned to Ottawa over Tumbler Ridge shooting


Researchers and opposition MPs have urged the federal government to speed up efforts to regulate the AI industry in the wake of the Tumbler Ridge shooting.


OpenAI acknowledged on Thursday that, if it had detected Jesse VanRootselaar’s ChatGPT activity today, it would have flagged it to law enforcement under its current police referral thresholds, which were updated “several months ago.”

Instead, that activity was only referred to RCMP after the shooting occurred.

It also revealed that it found a second ChatGPT account linked to VanRootselaar after she was identified as the shooter in Tumbler Ridge — despite her first account being shut down last June due to “violent” activity and a system meant to detect repeat violators of OpenAI’s policies.


The company committed to further enhancing both of those protocols, as well as establishing direct points of contact with Canadian authorities and developing better practices of connecting users to local mental health supports if they exhibit troubling behaviour.

B.C. Premier David Eby said Thursday he will also be meeting with Altman, calling OpenAI’s commitments “cold comfort for the people of Tumbler Ridge.”

He told reporters Friday in Vancouver there is no firm date yet for the meeting with the CEO, who has yet to comment publicly on the Tumbler Ridge tragedy or the changes his company says it will make in Canada.

“I want to recognize that OpenAI did come forward,” Eby said. “They did bring the information forward to police. They didn’t try to cover it up after the fact, but this was a colossal, horrific mistake, I guess, is the most generous interpretation I can offer, to fail to bring that information forward to authorities.


“It’s important that Mr. Altman realizes that, and I will be looking for his support for a national standard across Canada, a national threshold where all AI companies must report — and clear consequences for if they fail to report — incidents where people are planning violence, planning to hurt other people, and using these tools to develop those plans.”

—with files from the Canadian Press

© 2026 Global News, a division of Corus Entertainment Inc.


Minister ‘disappointed’ in OpenAI, but why is AI regulation taking years? | Globalnews.ca


Federal ministers who met with representatives of OpenAI expressed disappointment Wednesday that the company did not present steps it will take to improve its safety measures — including when police are warned of a user’s online behaviour.


Experts in the field and opposition MPs, however, are questioning why the federal government has been slow to regulate artificial intelligence before concerns were raised this month following the Tumbler Ridge, B.C., mass shooting.

Artificial Intelligence Minister Evan Solomon said he is giving the company a chance to update him in the coming days on “concrete” actions before he and other ministers address the issue through legislation, though he noted a series of bills addressing AI safety and privacy are in the works.

“Look, we told this company we want to see some hard proposals, some concrete action,” Solomon told reporters in Ottawa while heading into a Liberal caucus meeting.


“We’re disappointed that by the time they came here, they did not have something more concrete to offer, but we’ll see very shortly what they have,” he added, noting that “all options” were on the table for how the government might act.

Solomon summoned representatives of the company behind ChatGPT to Ottawa after it emerged that the shooter who killed eight people in Tumbler Ridge on Feb. 10 was flagged internally last June for her activity on the AI chatbot.

OpenAI did not alert the RCMP until after the mass shooting occurred, saying the “violent” activity did not meet the internal threshold of an “imminent” threat when the account was flagged and banned over seven months prior.


Video: AI concerns following Tumbler Ridge shooting


Justice Minister Sean Fraser, Public Safety Minister Gary Anandasangaree and Culture and Identity Minister Marc Miller — whose ministry is working on new online harms legislation — were also present at the meeting.


Prime Minister Mark Carney told reporters Wednesday he had not yet been briefed on the OpenAI meeting, but suggested he would be open to changes.

“I sat with the families of Tumbler Ridge, met with the first responders, saw the horror that — what happened and the pain that’s been caused,” he said.

“Obviously, anything that anyone could have done to prevent that tragedy or future tragedies must be done. We will fully explore it to the full lengths of the law and we’ll be very transparent about that process.”


Solomon and other ministers who were at the meeting said any action the government takes would focus on the threshold used to escalate concerning behaviour to law enforcement.


“There are issues around the assessment on credibility of a threat and the imminence of a threat that, in my view, if properly administered, could prevent tragedies on a go-forward basis,” Fraser said.

“The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government is going to be making changes.”

OpenAI told Global News Tuesday evening that the company appreciated the “frank discussion on how to prevent tragedies like this in the future.”


“Over the past several months, we have taken steps to strengthen our safeguards and made changes to our law enforcement referral protocol for cases involving violent activities, but the ministers underscored that Canadians expect continued concrete action and we heard that message loud and clear,” a spokesperson said.

“We’ve committed to follow up in the coming days with an update on additional steps we’re taking, as we continue to support law enforcement and work with the government on strengthening AI safety for all Canadians.”

OpenAI did not detail exactly what changes have been made in recent months, and did not immediately respond to Global News’ request for comment Wednesday.

Why aren’t any new rules in place?

Researchers who study online harms and AI say the Tumbler Ridge incident shows the AI industry shouldn’t be left to regulate itself, and that the government needs to be more proactive.


“The ministers ought to be looking at themselves as the ones who are responsible for undertaking regulation seriously when it comes to ChatGPT and other similar tools,” said Jennifer Raso, an assistant professor in law at McGill University.

“Pulling people up to Ottawa after one of the most horrible mass shootings in Canada to have them account for themselves after the harm’s been done seems to be too little, too late.”

Conservative MP Michelle Rempel Garner said she is “very concerned about the government’s capacity and willingness to address artificial intelligence policy writ large” and the pace of progress, noting no meaningful regulations have been enacted since ChatGPT emerged in 2022.

“I certainly don’t see it as a front-burner issue,” she told reporters ahead of question period Wednesday.

“I am calling on the government to take this issue a little more seriously, to be less reactive, and to restate that Conservatives are willing to collaborate with the government on smart policy and certainly discussions on the topic at least.”

NDP interim leader Don Davies told Global News in an interview that the government’s pace has been “glacial.”

“AI isn’t new. Online harms and threats and all sorts of intimidation and disclosure of intimate pictures, this is not new. This has been going on for years and the government has been fully aware of it,” he said.


“Where they’ve been absolutely, I think, negligent is in acting.”

Efforts to regulate the AI industry and address online harms through legislation died in Parliament last year ahead of the federal election.

The Artificial Intelligence and Data Act would have required AI companies to ensure their platforms are monitored for safety concerns and misuse, while enacting “proactive” measures to prevent real-world harm.

Fraser introduced legislation late last year that would crack down on the sharing of non-consensual sexualized deepfake images generated by AI, following similar bills enacted by provinces like British Columbia.


Video: OpenAI summoned to Ottawa over Tumbler Ridge shooting


Solomon has promised to unveil a new federal AI strategy in the first quarter of this year, delaying its launch from late 2025.

In a speech last year, he said Ottawa would avoid “over-indexing on warnings and regulation,” reflecting the Carney government’s emphasis on AI’s economic benefits and speedy adoption of the technology.


A summary of public comments submitted during consultation on the forthcoming strategy showed Canadians are deeply skeptical of AI and want to see government regulation, particularly addressing online harms and mental health concerns.

While allies like the United Kingdom and European Union have moved to strengthen AI regulation, attempts to do so in the U.S. have been sporadic. U.S. President Donald Trump has ordered states not to pass regulations before a national strategy is in place, but that federal standard has yet to emerge.

Canada’s privacy legislation says private companies “may” — not must — disclose personal information to authorities or another organization if they believe there is a risk of significant harm or that a law will be broken.

Any further decision-making is up to the company itself, leading to internal thresholds like OpenAI’s “imminent” threat identification.

Solomon said Wednesday that work is underway to update the Personal Information Protection and Electronic Documents Act, but did not say when it will be tabled or offer further details.

Anandasangaree expressed confidence that the investigation into the shooting will yield answers, including from OpenAI.

“The number of issues arising around Tumbler Ridge concern me,” he told reporters after Wednesday’s caucus meeting.

“Yesterday’s meeting was a critical first step with OpenAI. There’s still a lot of unanswered questions, and there’s certainly a sense of frustration and, frankly, a sense that tech companies overall are not doing enough to address the issues around information that they hold.”


Solomon emphasized that the government wants to make sure what happened in Tumbler Ridge “does not happen again.”

“Of course a failure occurred here,” he said. “I mean, look what happened.”

—with files from Global’s Touria Izri


OpenAI’s handling of Tumbler Ridge shooter info opens regulation questions | Globalnews.ca


Scrutiny over how OpenAI handled information about the Tumbler Ridge, B.C., mass shooter months before the deadly tragedy provides an opportunity for Canada to consider regulating artificial intelligence companies to inform police in similar scenarios, experts say.


The company behind ChatGPT confirmed last week it “proactively” identified and banned an account associated with Jesse Van Rootselaar in June 2025 for misusing the AI chatbot “in furtherance of violent activities.”

However, it did not inform police at that time because the activity did not meet the higher internal threshold of an “imminent” threat.

OpenAI ultimately contacted RCMP after police say 18-year-old Van Rootselaar killed eight people and wounded 25 others on Feb. 10, before taking her own life.

Artificial Intelligence Minister Evan Solomon summoned representatives to Ottawa on Tuesday to discuss the situation and the company’s safety practices.


Solomon told reporters Tuesday before the meeting that “all options are on the table when it comes to understanding what we can do about AI chatbots.”

Heritage Minister Marc Miller, whose ministry is working with Solomon’s to develop online safety legislation that would cover AI platforms, said the government is taking the time to get that bill right and wouldn’t tie it to what happened in Tumbler Ridge.

“I think there is the need to have legislation to make sure that platforms are behaving responsibly,” he said. “What that looks like is still to be determined, and I can’t discuss timelines with you on that.

“I think in this situation, there is legitimate thirst for easier answers, but I don’t think there are easy answers in this case, particularly with an open investigation. But … we need better answers than the ones we’ve gotten so far.”


Video: AI concerns following Tumbler Ridge shooting


Canada’s privacy legislation says private companies “may” — not must — disclose personal information to authorities or another organization if they believe there is a risk of significant harm or that a law will be broken.


Any further decision-making is up to the company itself, leading to internal thresholds like OpenAI’s “imminent” threat identification.

“This is yet another sign that there is a risk with letting OpenAI and other AI developers decide for themselves what is an appropriate safety framework,” said Vincent Paquin, an assistant professor of psychiatry at McGill University who researches the relationship between digital technologies and the mental health of young people.


“Ultimately, ChatGPT is a commercial product. It’s not an approved health-care device. And so it is concerning to see that there is an increasing number of people turning to ChatGPT and other AI products for mental health support and for sensitive discussions about things going on in their lives, without having a clear understanding of the safety of those interactions and the safety mechanisms that are in place.”


The revelations come as OpenAI and other AI chatbot makers face multiple lawsuits in the U.S. over allegations their platforms helped drive young people to suicide and self-harm.

OpenAI denies those allegations and says its safety systems refuse most, if not all, requests for harmful content, such as hateful and violent rhetoric and advice related to suicidal ideation.

The Wall Street Journal, which first reported OpenAI’s prior knowledge of Van Rootselaar’s ChatGPT activity, said her posts “described scenarios involving gun violence over the course of several days,” according to people familiar with the matter.


The report said company employees were alarmed by the posts and wrestled with whether to alert police last summer, before the company opted not to.

Global News has not independently verified the details in the report.

The B.C. government said in a statement Saturday that OpenAI officials met with a government representative on Feb. 11 — the day after the shooting — for “a meeting scheduled weeks in advance” to discuss the possibility of opening OpenAI’s first Canadian office.

“OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge,” the government said, but noted OpenAI requested contact information for the RCMP from the province on Feb. 12.


Video: OpenAI summoned to Ottawa over Tumbler Ridge shooting


Canada’s privacy commissioner, Philippe Dufresne, has previously said not having a Canadian business office to contact makes it more difficult for his agency to investigate tech companies like TikTok.


Brian McQuinn, an associate professor at the University of Regina and co-director of the Centre for Artificial Intelligence, Data, and Conflict, said the tech industry in general has deprioritized internal safety regulation ever since Elon Musk took over Twitter in 2022, rebranding it as X.

“Basically (after he) fired all the teams doing that kind of work, the other (social media) companies sort of followed suit and realized they could get away with it, too,” he said. “So less staff overhead and fewer headaches being created by your own staff by letting you know things.

“If you don’t know, then you can’t be held responsible.”

Dufresne’s office has launched an investigation into Musk-owned xAI and its Grok chatbot, which is built into the X social media platform, over allegations it facilitated the spread of non-consensual sexualized deepfake images of women and children. Other companies and U.S. states are conducting similar probes.

Musk has criticized the investigations as attempts to stifle free speech and expression.

Sharon Bauer, a privacy lawyer and AI governance strategist based in Toronto, said it’s important for any future legislation or regulation to strike the “fine balance” between individual privacy with the duty to warn of potential threats.

She said the term “imminent” is key.

“That is a really important threshold, because anything lower than that threshold would mean that they would be notifying law enforcement of things that may end up stigmatizing people or creating false positives, which would of course harm those individuals,” she said.


At the same time, Bauer added, “anything too high would mean missing genuine threats, which may have been the case in this situation.”

“I’m hoping that we’ll get answers about this, if they documented their reasoning about why they didn’t contact law enforcement, and that’s going to be really important to analyze and figure out if they made that right decision,” she said.


Video: Fresh questions about Tumbler Ridge tragedy


McQuinn said he also wants to see data about who has been kicked off AI chatbot and social media platforms for threatening to harm themselves or others, and whether there was any real world follow-up on those individuals.

“If the answer’s no, then they are just putting their heads in the sand,” he said.

“These companies (are worth) trillions of dollars, so the amount of money they spend on anything related to staffing and safety is negligible.”


He added that Canada’s forthcoming AI strategy needs to pair economic benefits and adoption strategies with robust safety protocols that answer these critical questions.

Paquin cited a recent California law, which requires large AI companies like OpenAI to report to the state any instances of their platforms being used for potentially “catastrophic” activities, as something Canada should model its own potential regulation after.

However, that law defines a catastrophic risk as something that would cause at least $1 billion in damage or more than 50 injuries or deaths.

The law has been praised by some AI companies like Anthropic for balancing public safety with allowing continued “innovation.”

“We should ask for more transparency and we should also think about a way of having an external oversight over those activities, because we cannot let the AI developers be their own judge, the judge of their own safety,” Paquin said.

—with files from Global’s Touria Izri


Federal government raises concerns over OpenAI safety measures after B.C. tragedy | Globalnews.ca


Canada’s minister of Artificial Intelligence says Ottawa is seeking answers from OpenAI and other artificial intelligence platforms following the deadly shootings in Tumbler Ridge, B.C.


“The horrifying tragedy in Tumbler Ridge has left families with unthinkable losses and shaken communities across Canada,” Evan Solomon said in a statement on Saturday.

“Like many Canadians, I am deeply disturbed by reports that concerning online activity from the suspect was not reported to law enforcement in a timely manner.”

Recent statements from OpenAI confirmed that the shooter was flagged while using ChatGPT last summer.

Solomon said Canadians expect online platforms, including OpenAI, to have “robust safety protocols and escalation practices” to help protect public safety.

B.C. Premier David Eby said reports alleging OpenAI may have had related intelligence prior to the attack are “profoundly disturbing.”


“We have confirmed with police that they are pursuing orders regarding the preservation of any potential evidence related to the shootings in Tumbler Ridge held by digital services companies, including social media platforms and AI companies,” Eby said in a statement Saturday.

OpenAI has previously said it contacted police following the incident and removed an account associated with the suspect for violating its policies.

“The pain that these families have gone through is unimaginable,” Eby added.

The premier is urging anyone with new information to contact authorities.


The province also provided background on its prior interactions with OpenAI.


According to the statement, a government representative met with company officials on Feb. 11 — a meeting scheduled weeks in advance regarding OpenAI’s potential interest in opening an office in Canada.

The following day, OpenAI requested contact information for the RCMP. That request was forwarded to the director of policing and law enforcement services, who connected the company with police.

“OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge,” the statement said.

Laura Huey, a professor of sociology at Western University in London, Ontario, said the company’s actions were not unexpected.


“I can’t say that I was particularly surprised. People are increasingly using AI-based apps for all sorts of things, including psychological counselling, dating advice and, of course, unfortunately, things like how to take one’s own life as well as how to commit violence against others,” Huey told Global News.

Huey said debates about privacy and law enforcement access to digital platforms are long-standing.

“What’s happening is the technology is far outpacing the ability of law enforcement to keep an eye on it, and therefore we rely really heavily on commercial companies to do what is in the best interest of individuals and the public.”

She noted that companies face competing pressures when deciding whether to alert authorities.

“ChatGPT and other apps are run by commercial entities that at the end of the day, their interest is protecting their assets and their business.”

Huey said clearer national rules may be needed to address potential gaps.

This development comes as RCMP say they are investigating threats circulating online that forced the cancellation of a funeral service for one of the victims of the shooting.

In an emailed statement, police confirmed they are aware of threats toward the family of one of the students ahead of a planned funeral service, and that safety measures have been implemented while they investigate.


“The RCMP is aware of threats that have circulated online and within the community and we can confirm that an investigation is under way,” Staff Sgt. Kris Clark with B.C. RCMP told Global News.

“A safety plan is in place for the individual(s) and community as the investigation continues.”

Police did not provide details about the nature of the threats but said officers have been working with local officials.

Global News has requested comment regarding the status of the funeral service. At the time of publication, it was not clear whether the service would proceed as planned.

RCMP say their investigations into the threats and the shootings remain ongoing.


Video: Questions over return to learning for Tumbler Ridge students as community grieves


— with files from Global News’ Amy Judd

© 2026 Global News, a division of Corus Entertainment Inc.