Canada’s AI minister says Anthropic withholding Mythos is ‘responsible’ – National | Globalnews.ca


Artificial Intelligence Minister Evan Solomon met on Tuesday with representatives of Anthropic — the company that said last week its latest chatbot, Mythos, is too risky for public release.


Anthropic released a preview of the system card for its newest AI model, Claude Mythos, stating that its capabilities were “substantially beyond those of any model we have previously trained” and that the model would therefore not be released to the public, citing cybersecurity dangers.


“Anthropic and the Canadian government are engaged in constructive, ongoing discussions,” Solomon said in an emailed statement from his office to Global News.

“I met with Anthropic this morning as part of our continued engagement with leading AI companies on safety, security, and Canada’s sovereign interests. The Government of Canada takes the protection of its systems, its critical infrastructure, and Canadians’ data with the utmost seriousness.”


Solomon stated that “Anthropic’s approach of working with defenders first, rather than releasing this new model broadly, is the responsible path and gives people protecting critical systems a head start.”


Concerns about rapid AI development and ransomware have grown significantly amongst Canadians.

A January 2026 federal report by the Canadian Centre for Cyber Security stated that numerous Canadian organizations and businesses, “regardless of size or sector,” as well as individuals, are susceptible to ransomware attacks.

However, “critical infrastructure and large corporations” were found to be the top targets for ransomware activities.

© 2026 Global News, a division of Corus Entertainment Inc.


AI minister wants more clarity on OpenAI’s changes after Tumbler Ridge | Globalnews.ca


Artificial Intelligence Minister Evan Solomon says he wants more clarity on the safety protocol changes OpenAI has committed to after the Tumbler Ridge, B.C., mass shooting, and isn’t ruling out legislative changes to address the issue.


The company behind ChatGPT said Thursday it would enhance its police referral and repeat offender detection practices, after it did not escalate the shooter’s AI chatbot activity to police months before she killed eight people and wounded dozens of others.

In a statement Friday, Solomon said OpenAI’s announcement did not include “a detailed plan for how these commitments will be implemented in practice.”

He said he would be meeting with CEO Sam Altman next week to “seek further clarity” and assurances of “concrete action.”

“The tragedy in Tumbler Ridge has raised serious questions about how digital platforms respond when credible warning signs of violence emerge,” the minister said. “Canadians deserve greater clarity about how human review decisions are made, how escalation thresholds are applied, and how privacy considerations are balanced with public safety.


“We will be seeking further clarity on how human review is conducted and whether Canadian context and best practices are appropriately embedded in those decisions. I will also be consulting with my cabinet colleagues on additional options.”

Solomon added he would also be meeting with other AI companies in the coming weeks “to ensure there is a consistent and clear approach to escalation, local coordination, and youth protection.”


“Decisions affecting Canadians must reflect Canadian laws, Canadian standards, and Canadian expertise,” he said.

“All options remain on the table as we assess what further steps may be necessary. Public safety must come first.”

Solomon and other federal ministers expressed frustration with OpenAI after the company did not present an action plan during a meeting in Ottawa on Tuesday.

The ministers said they would give OpenAI a chance to come back with one before considering a legislative response to the issue of how AI companies handle and address users’ violent behaviour.




OpenAI representatives summoned to Ottawa over Tumbler Ridge shooting


Researchers and opposition MPs have urged the federal government to speed up efforts to regulate the AI industry in the wake of the Tumbler Ridge shooting.


OpenAI acknowledged on Thursday that, if it had detected Jesse VanRootselaar’s ChatGPT activity today, it would have flagged it to law enforcement under its current police referral thresholds, which were updated “several months ago.”

Instead, that activity was referred to the RCMP only after the shooting occurred.

It also revealed that it found a second ChatGPT account linked to VanRootselaar after she was identified as the shooter in Tumbler Ridge — despite her first account being shut down last June over “violent” activity, and despite a system meant to detect repeat violators of OpenAI’s policies.


The company committed to further enhancing both of those protocols, as well as establishing direct points of contact with Canadian authorities and developing better practices for connecting users to local mental health supports when they exhibit troubling behaviour.

B.C. Premier David Eby said Thursday he will also be meeting with Altman, calling OpenAI’s commitments “cold comfort for the people of Tumbler Ridge.”

He told reporters Friday in Vancouver there is no firm date yet for the meeting with the CEO, who has yet to comment publicly on the Tumbler Ridge tragedy or the changes his company says it will make in Canada.

“I want to recognize that OpenAI did come forward,” Eby said. “They did bring the information forward to police. They didn’t try to cover it up after the fact, but this was a colossal, horrific mistake, I guess, is the most generous interpretation I can offer, to fail to bring that information forward to authorities.


“It’s important that Mr. Altman realizes that, and I will be looking for his support for a national standard across Canada, a national threshold where all AI companies must report — and clear consequences for if they fail to report — incidents where people are planning violence, planning to hurt other people, and using these tools to develop those plans.”

—with files from the Canadian Press

© 2026 Global News, a division of Corus Entertainment Inc.