Burger King has come under fire for new tech it’s trialling (Picture: NIKLAS HALLE’N/AFP via Getty Images)
Burger King fans have been left feeling like they’re living in an episode of Black Mirror, after learning about a new technology the chain is currently testing.
Restaurant Brands International, the owner of Burger King, confirmed this week that it’s trialling an OpenAI-powered chatbot inside headsets across 500 restaurants in the US, with a plan to later roll it out nationwide.
The AI chatbot, known as ‘Patty’, can talk to employees through the headsets, and is intended to be a ‘coaching tool’, according to Thibault Roux, Burger King’s chief digital officer in the US and Canada.
Patty will combine data from several aspects of the business, including drive-thru conversations, stock levels, and kitchen equipment. Staff will be able to ask the chatbot questions, such as how to make burgers or how to clean equipment like the milkshake machine.
It is also being trained to ‘measure friendliness’ by recognising certain words such as ‘please’, ‘thank you’ and ‘welcome to Burger King’, and the chain is reportedly looking into ‘capturing the tone of conversations’ too, according to The Verge.
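The Verge’s report doesn’t describe the mechanics, but phrase-spotting of this kind is simple to picture. Below is a minimal illustrative sketch in Python; the function name and phrase list are our own assumptions, not details of Burger King’s or OpenAI’s actual system:

```python
# Illustrative sketch only: a naive phrase-spotting "friendliness" score.
# The real system is reportedly also being trained to capture tone,
# which a simple phrase count like this cannot do.
FRIENDLY_PHRASES = ("please", "thank you", "welcome to burger king")

def friendliness_score(transcript: str) -> int:
    """Count occurrences of friendly phrases in a drive-thru transcript."""
    text = transcript.lower()
    return sum(text.count(phrase) for phrase in FRIENDLY_PHRASES)

# One occurrence of each phrase, so this prints 3.
print(friendliness_score("Welcome to Burger King! Thank you, please pull forward."))
```

Anything beyond counting phrases, such as judging tone, would need a language model rather than a lookup, which is presumably where the OpenAI integration comes in.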
The AI chatbot is in the headsets (Picture: Bloomberg via Getty Images)
Other functions include the ability to alert employees to issues, such as a drinks machine being low on Coca-Cola, and flagging issues that customers have reported, such as messy toilets.
And as Patty is integrated with Burger King’s cloud-based point-of-sale system, it can remove a product that’s not available from all digital menus and kiosks within 15 minutes, to avoid customer disappointment.
Roux claims the technology is something Burger King is ‘tinkering with’ but acknowledges it’s a ‘risky bet’, as it’s not something ‘every guest is ready for’.
And he’s certainly not wrong.
On social media there’s been quite a lot of backlash to the trial already, with Facebook users branding it ‘dystopian’, comparing it to something out of a ‘Black Mirror’ episode, and claiming it’s made them feel as if they are ‘living in hell’.
The AI chatbot is being tested at 500 restaurants across the US (Picture: Getty Images)
Following this, Burger King has reiterated that Patty is intended as a ‘coaching tool’ and not a way for the company to ‘track or evaluate staff saying specific words or phrases’.
A spokesperson for the company told The Grocer: ‘BK Assistant is a coaching and operational support tool built to help our restaurant teams manage complexity and stay focused on delivering a great guest experience.
‘It’s not about scoring individuals or enforcing scripts. It’s about reinforcing great hospitality and giving managers helpful, real-time insights so they can recognise their teams more effectively.’
It has not yet been confirmed whether Burger King UK will start using the technology. Metro has contacted the fast food chain for further comment.
This isn’t the first time a fast food chain has tested out AI, with both Taco Bell and McDonald’s previously introducing AI into their drive-thrus in the US.
Neither trial proved overly successful, though, with McDonald’s removing AI-powered voice ordering from more than 100 locations in July 2024 after a string of errors, including customers being given multiple unwanted items and some unusual orders, like bacon on ice cream.
Taco Bell first introduced AI to 500 drive-thrus in 2023, but has since reportedly slowed down the US-wide rollout of the technology, after experiencing similar issues.
Customers have complained on social media about mistakes and glitches with the tech, while others have tried to prank it, with one person notably trying to see what would happen if they ordered 18,000 cups of water.
Spoiler alert: It did not end well.
Federal ministers who met with representatives of OpenAI expressed disappointment Wednesday that the company did not present the steps it would take to improve its safety measures — including when police are warned about a user’s online behaviour.
Experts in the field and opposition MPs, however, are questioning why the federal government did not move to regulate artificial intelligence before the concerns raised this month following the Tumbler Ridge, B.C., mass shooting.
Artificial Intelligence Minister Evan Solomon said he is giving the company a chance to update him in the coming days on “concrete” actions before he and other ministers address the issue through legislation, though he noted a series of bills addressing AI safety and privacy are in the works.
“Look, we told this company we want to see some hard proposals, some concrete action,” Solomon told reporters in Ottawa while heading into a Liberal caucus meeting.
“We’re disappointed that by the time they came here, they did not have something more concrete to offer, but we’ll see very shortly what they have,” he added, noting that “all options” were on the table for how the government might act.
Solomon summoned representatives of the company behind ChatGPT to Ottawa after it emerged that the shooter who killed eight people in Tumbler Ridge on Feb. 10 was flagged internally last June for her activity on the AI chatbot.
OpenAI did not alert the RCMP until after the mass shooting occurred, saying the “violent” activity did not meet the internal threshold of an “imminent” threat when the account was flagged and banned over seven months prior.
Justice Minister Sean Fraser, Public Safety Minister Gary Anandasangaree and Culture and Identity Minister Marc Miller — whose ministry is working on new online harms legislation — were also present at the meeting.
Prime Minister Mark Carney told reporters Wednesday he had not yet been briefed on the OpenAI meeting, but suggested he would be open to changes.
“I sat with the families of Tumbler Ridge, met with the first responders, saw the horror that — what happened and the pain that’s been caused,” he said.
“Obviously, anything that anyone could have done to prevent that tragedy or future tragedies must be done. We will fully explore it to the full lengths of the law and we’ll be very transparent about that process.”
Solomon and other ministers who were at the meeting said any action the government takes would focus on the threshold used to escalate concerning behaviour to law enforcement.
“There are issues around the assessment of the credibility of a threat and the imminence of a threat that, in my view, if properly administered, could prevent tragedies on a go-forward basis,” Fraser said.
“The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government is going to be making changes.”
OpenAI told Global News Tuesday evening that the company appreciated the “frank discussion on how to prevent tragedies like this in the future.”
“Over the past several months, we have taken steps to strengthen our safeguards and made changes to our law enforcement referral protocol for cases involving violent activities, but the ministers underscored that Canadians expect continued concrete action and we heard that message loud and clear,” a spokesperson said.
“We’ve committed to follow up in the coming days with an update on additional steps we’re taking, as we continue to support law enforcement and work with the government on strengthening AI safety for all Canadians.”
OpenAI did not detail exactly what changes have been made in recent months, and did not immediately respond to Global News’ request for comment Wednesday.
Why aren’t any new rules in place?
Researchers who study online harms and AI say the Tumbler Ridge incident shows the AI industry shouldn’t be left to regulate itself, and that the government needs to be more proactive.
“The ministers ought to be looking at themselves as the ones who are responsible for undertaking regulation seriously when it comes to ChatGPT and other similar tools,” said Jennifer Raso, an assistant professor in law at McGill University.
“Pulling people up to Ottawa after one of the most horrible mass shootings in Canada to have them account for themselves after the harm’s been done seems to be too little, too late.”
Conservative MP Michelle Rempel Garner said she is “very concerned about the government’s capacity and willingness to address artificial intelligence policy writ large” and the pace of progress, noting no meaningful regulations have been enacted since ChatGPT emerged in 2022.
“I certainly don’t see it as a front-burner issue,” she told reporters ahead of question period Wednesday.
“I am calling on the government to take this issue a little more seriously, to be less reactive, and to restate that Conservatives are willing to collaborate with the government on smart policy and certainly discussions on the topic at least.”
NDP interim leader Don Davies told Global News in an interview that the government’s pace has been “glacial.”
“AI isn’t new. Online harms and threats and all sorts of intimidation and disclosure of intimate pictures, this is not new. This has been going on for years and the government has been fully aware of it,” he said.
“Where they’ve been absolutely, I think, negligent is in acting.”
Efforts to regulate the AI industry and address online harms through legislation died in Parliament last year ahead of the federal election.
The Artificial Intelligence and Data Act would have required AI companies to ensure their platforms are monitored for safety concerns and misuse, while enacting “proactive” measures to prevent real-world harm.
Fraser introduced legislation late last year that would crack down on the sharing of non-consensual sexualized deepfake images generated by AI, following similar bills enacted by provinces like British Columbia.
Solomon has promised to unveil a new federal AI strategy in the first quarter of this year, delaying its launch from late 2025.
In a speech last year, he said Ottawa would avoid “over-indexing on warnings and regulation,” reflecting the Carney government’s emphasis on AI’s economic benefits and speedy adoption of the technology.
A summary of public comments submitted during consultation on the forthcoming strategy showed Canadians are deeply skeptical of AI and want to see government regulation, particularly addressing online harms and mental health concerns.
While allies like the United Kingdom and European Union have moved to strengthen AI regulation, attempts to do so in the U.S. have been sporadic. U.S. President Donald Trump has ordered states not to pass regulations before a national strategy is in place, but that federal standard has yet to emerge.
Canada’s privacy legislation says private companies “may” — not must — disclose personal information to authorities or another organization if they believe there is a risk of significant harm or that a law will be broken.
Any further decision-making is up to the company itself, leading to internal thresholds like OpenAI’s “imminent” threat identification.
Solomon said Wednesday that work is underway to update the Personal Information Protection and Electronic Documents Act, but did not say when it will be tabled or offer further details.
Anandasangaree expressed confidence that the investigation into the shooting will yield answers, including from OpenAI.
“The number of issues arising around Tumbler Ridge concern me,” he told reporters after Wednesday’s caucus meeting.
“Yesterday’s meeting was a critical first step with OpenAI. There’s still a lot of unanswered questions, and there’s certainly a sense of frustration and, frankly, a sense that tech companies overall are not doing enough to address the issues around information that they hold.”
Solomon emphasized that the government wants to make sure what happened in Tumbler Ridge “does not happen again.”
“Of course a failure occurred here,” he said. “I mean, look what happened.”
Scrutiny over how OpenAI handled information about the Tumbler Ridge, B.C., mass shooter in the months before the deadly tragedy provides an opportunity for Canada to consider requiring artificial intelligence companies to inform police in similar scenarios, experts say.
The company behind ChatGPT confirmed last week it “proactively” identified and banned an account associated with Jesse Van Rootselaar in June 2025 for misusing the AI chatbot “in furtherance of violent activities.”
However, it did not inform police at that time because the activity did not meet the higher internal threshold of an “imminent” threat.
OpenAI ultimately contacted the RCMP after police say 18-year-old Van Rootselaar killed eight people and wounded 25 others on Feb. 10, before taking her own life.
Artificial Intelligence Minister Evan Solomon summoned representatives to Ottawa on Tuesday to discuss the situation and the company’s safety practices.
Solomon told reporters Tuesday before the meeting that “all options are on the table when it comes to understanding what we can do about AI chatbots.”
Heritage Minister Marc Miller, whose ministry is working with Solomon’s to develop online safety legislation that would cover AI platforms, said the government is taking the time to get that bill right and wouldn’t tie it to what happened in Tumbler Ridge.
“I think there is the need to have legislation to make sure that platforms are behaving responsibly,” he said. “What that looks like is still to be determined, and I can’t discuss timelines with you on that.
“I think in this situation, there is legitimate thirst for easier answers, but I don’t think there are easy answers in this case, particularly with an open investigation. But … we need better answers than the ones we’ve gotten so far.”
“This is yet another sign that there is a risk with letting OpenAI and other AI developers decide for themselves what is an appropriate safety framework,” said Vincent Paquin, an assistant professor of psychiatry at McGill University who researches the relationship between digital technologies and the mental health of young people.
“Ultimately, ChatGPT is a commercial product. It’s not an approved health-care device. And so it is concerning to see that there is an increasing number of people turning to ChatGPT and other AI products for mental health support and for sensitive discussions about things going on in their lives, without having a clear understanding of the safety of those interactions and the safety mechanisms that are in place.”
Get breaking National news
For news impacting Canada and around the world, sign up for breaking news alerts delivered directly to you when they happen.
The revelations come as OpenAI and other AI chatbot makers face multiple lawsuits in the U.S. over allegations their platforms helped drive young people to suicide and self-harm.
A media report said company employees were alarmed by the shooter’s posts and wrestled with whether to alert police last summer, before the company opted not to.
Global News has not independently verified the details in the report.
The B.C. government said in a statement Saturday that OpenAI officials met with a government representative on Feb. 11 — the day after the shooting — for “a meeting scheduled weeks in advance” to discuss the possibility of opening OpenAI’s first Canadian office.
“OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge,” the government said, but noted OpenAI requested contact information for the RCMP from the province on Feb. 12.
Canada’s privacy commissioner, Philippe Dufresne, has previously said not having a Canadian business office to contact makes it more difficult for his agency to investigate tech companies like TikTok.
Brian McQuinn, an associate professor at the University of Regina and co-director of the Centre for Artificial Intelligence, Data, and Conflict, said the tech industry in general has deprioritized internal safety regulation ever since Elon Musk took over Twitter in 2022, rebranding it as X.
“Basically (after he) fired all the teams doing that kind of work, the other (social media) companies sort of followed suit and realized they could get away with it, too,” he said. “So less staff overhead and fewer headaches being created by your own staff by letting you know things.
“If you don’t know, then you can’t be held responsible.”
Dufresne’s office has launched an investigation into Musk-owned xAI and its Grok chatbot, which is built into the X social media platform, over allegations it facilitated the spread of non-consensual sexualized deepfake images of women and children. Other companies and U.S. states are conducting similar probes.
Musk has criticized the investigations as attempts to stifle free speech and expression.
Sharon Bauer, a privacy lawyer and AI governance strategist based in Toronto, said it’s important for any future legislation or regulation to strike the “fine balance” between individual privacy and the duty to warn of potential threats.
She said the term “imminent” is key.
“That is a really important threshold, because anything lower than that threshold would mean that they would be notifying law enforcement of things that may end up stigmatizing people or creating false positives, which would of course harm those individuals,” she said.
At the same time, Bauer added, “anything too high would mean missing genuine threats, which may have been the case in this situation.”
“I’m hoping that we’ll get answers about this, if they documented their reasoning about why they didn’t contact law enforcement, and that’s going to be really important to analyze and figure out if they made that right decision,” she said.
McQuinn said he also wants to see data about who has been kicked off AI chatbot and social media platforms for threatening to harm themselves or others, and whether there was any real-world follow-up on those individuals.
“If the answer’s no, then they are just putting their heads in the sand,” he said.
“These companies (are worth) trillions of dollars, so the amount of money they spend on anything related to staffing and safety is negligible.”
He added that Canada’s forthcoming AI strategy needs to pair economic benefits and adoption strategies with robust safety protocols that answer these critical questions.
Paquin cited a recent California law, which requires large AI companies like OpenAI to report to the state any instances of their platforms being used for potentially “catastrophic” activities, as something Canada should model its own potential regulation after.
However, that law defines a catastrophic risk as something that would cause at least $1 billion in damage or more than 50 injuries or deaths.
The law has been praised by some AI companies like Anthropic for balancing public safety with allowing continued “innovation.”
“We should ask for more transparency and we should also think about a way of having an external oversight over those activities, because we cannot let the AI developers be their own judge, the judge of their own safety,” Paquin said.
Artificial Intelligence Minister Evan Solomon says he summoned representatives from OpenAI to Ottawa to discuss safety concerns following revelations about interactions the Tumbler Ridge, B.C., shooter had with ChatGPT.
OpenAI says the account was suspended due to concerns about the suspect’s posts, but the company did not alert law enforcement officials in Canada because the activity was not deemed an imminent threat.
“The horrifying tragedy in Tumbler Ridge has left families with unthinkable losses and shaken communities across Canada,” Solomon said in a statement on Saturday.
“Like many Canadians, I am deeply disturbed by reports that concerning online activity from the suspect was not reported to law enforcement in a timely manner.”
Solomon said Canadians expect online platforms, including OpenAI, to have “robust safety protocols and escalation practices” to help protect public safety.
On Friday, OpenAI confirmed that an account connected with the Tumbler Ridge shooter, Jesse Van Rootselaar, was identified in June 2025 through its “abuse detection and enforcement efforts.”
Van Rootselaar shot and killed eight people on Feb. 10 — her mother and half-brother at her home and then five students and an educator at Tumbler Ridge Secondary School. Van Rootselaar was then found dead of what appeared to be a self-inflicted gunshot wound inside the school, RCMP later confirmed.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” a spokesperson for OpenAI, the company behind ChatGPT, said on Friday afternoon, adding that the company contacted the RCMP after the incident on Feb. 10.
“We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”
Solomon said on Monday he is deeply disturbed by the reports of what happened with ChatGPT and Van Rootselaar’s account, and that he contacted the company over the weekend to get more information and to arrange a meeting in Ottawa on Tuesday.
He says he expects the company’s top safety representatives to explain its protocols and how it decides to forward cases to law enforcement.
— With files from Global News’ Prisha Dev and The Canadian Press
Would make for quite a good amusement park ride (Picture: Getty/Metro)
Scientists are considering a rather novel way to get a better look at 3I/ATLAS – a solar slingshot.
In the months since the interstellar trespasser was spotted, scientists clashed over whether it was a giant snowball (a comet) or… a UFO.
One reason for this was that even at its closest to Earth in December, 3I/ATLAS was still 167 million miles away, making observations tricky.
So, why not just send a spacecraft over there? That’s what scientists think could be possible with a rather risky rocket manoeuvre.
In a new paper, a team from the non-profit Initiative for Interstellar Studies said this would be achievable by exploiting the ‘Oberth effect’.
‘As a spacecraft is falling into the gravitational potential well, it fires its rockets, coming out of it with a greater kinetic energy,’ Dr Alfredo Carpineti, an astrophysicist who was not involved in the paper, told Metro.
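For the curious, the pay-off can be read straight off the kinetic-energy formula. What follows is a minimal sketch of the arithmetic, our illustration rather than something taken from the paper:

```latex
% Change in kinetic energy per unit mass from a burn of size \Delta v
% performed while the craft is moving at speed v:
\Delta E = \tfrac{1}{2}(v + \Delta v)^2 - \tfrac{1}{2}v^2
         = v\,\Delta v + \tfrac{1}{2}\Delta v^2
```

The same burn buys an energy gain that grows with the craft’s current speed, so firing the engines at the bottom of the sun’s gravity well, where the craft is moving fastest, extracts the most energy from the fuel.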
Comet 3I/ATLAS was first spotted last July (Picture: International Gemini Observatory)
If successful, this would make the 3I/ATLAS interceptor the fastest spacecraft in human history.
Our interstellar visitor will do a pit-stop at Jupiter in about 20 days, marking the halfway point of its time in our cosmic neighbourhood.
The plan would be to fly the interceptor out to the gas giant first, using its gravity to slow the craft down. (If it beelined straight for the sun, it would be travelling so fast that it would burn up.)
Experts propose launching the probe in 2035; it could reach 3I/ATLAS by 2085, when the object would be 68 million miles away.
As elaborate as this sounds, Dr Carpineti says this is ‘the most efficient time to burn fuel’.
To achieve this, though, would involve flying just 140,000 miles from the sun’s centre, meaning the craft would need to endure searing heat.
The researchers suggest the craft could be clad in carbon composite and aerogel, one of the lightest materials in the world.
One thing holding the mission back is that even with the Oberth effect, the craft still wouldn’t be fast enough to come close to entering orbit around 3I/ATLAS.
3I/ATLAS, initially designated A11pl3Z, is only the third interstellar visitor to be discovered passing through our neck of the cosmic woods.
The first was ‘Oumuamua, which travelled past us in 2017. In 2019, Borisov, a comet of interstellar origin, passed by.
Scientists believe that, like Borisov, 3I/ATLAS likely formed as a comet around another star before being flung out into the cosmos.
Dr Carpineti adds: ‘The work doesn’t look at the feasibility of the mission but just the manoeuvre.
‘Indeed, it’s possible to use this approach to catch up with the comet.
‘But since the interstellar object is so much faster than the previous two, it would take decades.’
It has arms, hands, eyes — of a sort — and can stand for hours doing the same task, over and over, without uttering a word of complaint.
But Toyota Canada’s latest employee is unlike any other ever to grace the floor of the company’s Woodstock, Ont., assembly plant. You see, Digit is a humanoid robot.
Following a successful pilot, the company has signed a commercial Robots-as-a-Service agreement with Oregon-based Agility Robotics to deploy its general-purpose robot at the facility. The robots will support manufacturing, supply chain and logistics operations.
Digit, a humanoid robot developed by Agility Robotics, performs material-handling tasks inside a manufacturing facility. (Agility Robotics)
While seven robots are allocated under the agreement, deployment will begin with three units.
“After evaluating a number of robots, we are excited to deploy Digit to improve the team member experience and further increase operational efficiency in our manufacturing facilities,” Tim Hollander, president of Toyota Motor Manufacturing Canada, said in a release.
Digit is designed to take on repetitive and physically demanding tasks commonly found on automotive production lines.
In the release, the companies said that automating “extremely repetitive and physically taxing tasks” could reduce strain and increase safety for employees, freeing them to focus on more value-added work.
Agility Robotics CEO Peggy Johnson said partnering with Toyota, one of the world’s largest automakers, marks a significant step for humanoid robots in industrial settings.
“Toyota is one of the premier companies in the world; one with a long history of innovation and success, so it’s a privilege to join forces to integrate humanoid robotic solutions like Digit into automotive production,” Johnson said.
The companies say they will continue exploring additional use cases where robots and artificial intelligence could further augment automotive production.
Digit, a humanoid robot developed by Agility Robotics, moves containers along a conveyor inside an Amazon facility. (Agility Robotics)
Toyota joins a growing number of Fortune 500 companies deploying Agility’s humanoid robots globally, including GXO, Schaeffler and Amazon.
Toyota Motor Manufacturing Canada operates vehicle assembly plants in Cambridge and Woodstock and is Toyota’s largest manufacturing operation outside Japan.
Less than two years ago, a federal government report warned Canada should prepare for a future where, thanks to artificial intelligence, it is “almost impossible to know what is fake or real.”
Now, researchers are warning that moment may already be here, and senior officials in Ottawa this week said the government is “very concerned” about increasingly sophisticated AI-generated content like deepfakes impacting elections.
“We are approaching that place very quickly,” said Brian McQuinn, an associate professor at the University of Regina and co-director of the Centre for Artificial Intelligence, Data and Conflict.
He added the United States could quickly become a top source of such content — a threat that could accelerate amid future independence battles in Quebec and particularly Alberta, a movement that has already been seized on by some U.S. government and media figures.
“We are 100 per cent guaranteed to be getting deepfakes originating from the U.S. administration and its proxies, without question,” said McQuinn. “We already have, and it’s just the question of the volume that’s coming.”
During a House of Commons committee hearing on foreign election interference on Tuesday, Prime Minister Mark Carney’s national security and intelligence advisor Nathalie Drouin said Canada expects the U.S., like all other foreign nations, to stay out of its domestic political affairs.
That came in response to the lone question from MPs about the possibility of the U.S. becoming a foreign interference threat on par with Russia, China or India.
The rest of the two-hour hearing focused on the previous federal election and whether Ottawa is prepared for future threats, including AI and disinformation.
“I do know that the government is very concerned about AI and the potentially pernicious effects,” said deputy foreign affairs minister David Morrison, who, like Drouin, is a member of the Critical Election Incident Public Protocol Panel tasked with warning Canadians about interference.
Asked if Canada should seek to label AI-generated content online, Morrison said: “I don’t know whether there’s an appetite for labelling specifically,” noting that’s a decision for platforms to make.
“It is not easy to put the government in the position of saying what is true and what is not true,” he added.
Ottawa is currently considering legislation that will address online harms and privacy concerns related to AI, but it’s not yet clear if the bill will seek to crack down on disinformation.
“Canada is working on the safety of that new technology. We’re developing standards for AI,” said Drouin, who also serves as deputy clerk of the Privy Council.
She noted that Justice Marie-Josée Hogue, who led the public inquiry into foreign interference, concluded in her final report last year that disinformation is the greatest threat to Canadian democracy — thanks in part to the rise of generative AI.
Addressing and combating that threat is “an endless, ongoing job,” Drouin said. “It never ends.”
The Privy Council Office told Global News it provided an “initial information session relating to deepfakes” to MPs on Wednesday, and would offer additional sessions to “all interested parliamentarians as well as to political parties over the coming weeks.”
Experts like McQuinn say such a briefing is long overdue, and that government, academia and media must also step up educating an already-skeptical Canadian public on how to discern truth from fiction.
“There should be annual training (for politicians and their staffs), not just on deepfakes and disinformation, but foreign interference altogether,” said Marcus Kolga, a senior fellow at the Macdonald-Laurier Institute and founder of DisinfoWatch.
“This needs leadership. Right now, I’m not seeing that leadership, but we desperately need it because all of us can see what is coming.”
Kolga also agreed there is “no doubt” that official U.S. government channels, and U.S. President Donald Trump himself, are becoming a major source of that content.
“The trajectory is rather clear,” he said. “So I think that we need to anticipate that that’s going to happen. Reacting to it after it happens isn’t all that helpful — we need to be preparing at this time.”
Threat growing from the U.S., researchers say
Morrison noted Tuesday that the elections panel, as well as the Security and Intelligence Threats to Elections (SITE) task force, did not observe any significant use of AI to interfere in last year’s federal election.
However, he added that “our adversaries in this space are continually evolving their tactics, so it’s only a matter of time, and we do need to be very vigilant.”
Researchers now say the U.S. is quickly becoming a part of that threat landscape.
McQuinn said part of the issue is that the online disinformation Canadians see is spread primarily on American-owned social media platforms like X and Facebook, with TikTok now under U.S. ownership as well.
That has posed challenges for foreign countries trying to regulate content on those platforms, with European and British laws facing resistance and hostility from the companies and the Trump administration, which has threatened severe penalties, including tariffs and even sanctions.
Digital services taxes that sought to claw back revenues from tech companies operating in foreign countries have been identified by the U.S. as trade irritants, with Canada’s tax nearly scuttling negotiations last year before it was rescinded.
Kolga noted the spread of disinformation by U.S. content creators and platforms is not new, whether it originates from America or from elsewhere in the world. Other countries, including Russia, India and China, are known to use disinformation campaigns and have been identified in Canadian security reports as significant sources of foreign interference efforts.
What is new, McQuinn said, is the involvement of Trump and his administration in pushing that disinformation, including AI deepfakes.
While much of the content is clearly fake or designed to elicit a reaction — a White House image showing Trump and a penguin walking through an Arctic landscape suggested to be Greenland, or Trump sharing third-party AI content depicting him flying a feces-spraying fighter jet over protesters — there have been more subtle examples.
The White House was accused last month of using AI to alter a photo of a protester arrested in Minnesota during a federal immigration crackdown in the state to make the woman appear as though she were crying.
In response to criticism over the altered image, White House deputy communications director Kaelan Dorr wrote on X, “The memes will continue.” The image remains online.
“The present U.S. administration is the only western country that we know of (that) on a regular basis is publishing or sharing or promoting obvious fakes and deepfakes, at a level that has never been seen by a western government before,” McQuinn said.
He said the online strategy and behaviour matches that of common state disinformation actors like Russia and China, as well as armed groups like the Taliban, which don’t have “any respect” for the truth.
“If you don’t (have that respect), then you will always have an asymmetrical advantage against any actor, whether it’s state or non-state, who wants to in some way adhere to the truth,” he said.
“(This) U.S. administration will always have an advantage over Canadian actors because they no longer have any controls on them or restraints, because truth is no longer a factor in their communication.”
McQuinn added his own research suggests 83 per cent of disinformation is passed along by average Canadians who don’t immediately realize the content they’re sharing is fake.
“It’s not that they necessarily believe in the disinformation,” he said. “Something looks kind of catchy or aligns with their ideas of the world, and they will pass it on without reading in the second or third paragraph that the idea that they agreed with now morphs into something else.
“The good news is that Canadians are learning very quickly” how to spot things like deepfakes, he added, which is creating “a certain amount of skepticism that is naturally cropping up in the population.”
Yet Trump’s repeated sharing of AI content online that imagines U.S. control of Canada — a nod to his ‘51st state’ threats — as well as tacit support from U.S. administration figures for the Alberta independence movement, has researchers increasingly worried.
“My real concern is that when Donald Trump does order the U.S. government to start supporting some of those narratives and starts actually engaging in state disinformation, in terms of Canada’s unity — when that happens, then we’re in real trouble,” Kolga said.
Most children of secondary school age (we’re talking 12- to 15-year-olds) have a smartphone – and some of them will be allowed to have one on the condition they’re happy to give their device up every now and then for their parents to check.
But what happens if, during one of these checks, you spot something that makes your heart sink? And what about if your teen hasn’t given you permission to check their phone, but you’ve seen a notification flash up that’s left you worried?
It’s a minefield – and there’s no set rule for tackling this, as everyone’s situation will be different. That said, experts have shared their thoughts on how to approach this tricky moment, without causing a huge rift.
If you DO have consent to look at your child’s phone…
Counselling Directory member Bella Hird told HuffPost UK that parents who have an agreement in place with their child allowing spot checks “are in a very good starting place”.
“Think of your child’s phone a little as you would think of the world. They need your support to navigate it. There will be places and situations that, until they reach a certain age, you would not let them wander off into unsupervised,” she said.
If there’s a message on their phone that worries you, the therapist advises having a chat with your child about it: “Approach the conversation with your child with honesty and curiosity. So for example, explain ‘this kind of message really worries me and I want to know we are keeping you safe, can you explain to me a little about the context?’.”
She then urges parents to give their child the space to explain, and to try not to react with fear or anger, as this will simply shut the conversation down. Punishments will only drive the wedge further, too.
Education and child psychologist Dr Sasha Hall said the key here is offering a calm and proportionate response, rather than punishment.
If messages involve adult or sexualised content, the psychologist said key considerations include: whether the material is age-appropriate; whether there is any risk, pressure or coercion; and whether the young person understands boundaries and consent.
“Adolescence is a stage where children need increasing autonomy and privacy compared to earlier childhood, but this should be matched with developmentally appropriate safeguards,” she added.
“The aim is not to remove independence, but to support safe decision-making while those skills are still forming.”
Hird added that it’s important to help your child understand that it is OK to make mistakes, and that being open with you will ultimately leave them feeling supported through potentially difficult or dangerous scenarios.
“Explain to your child what it is about the message or what you have seen that has concerned you and ask them if they understand your worries,” she said.
“They will probably tell you there is nothing to be concerned about, in which case ask them to explain more.”
There might be times when you think your child is in danger — for example, they are being groomed — in which case, you will need to take action. Hird said “it is really important to try to take your child on that journey with you”.
She advised: “Explain to them why you are doing what you are doing and give them as much agency as possible – so, for example, in the case that you need to involve the police, you should explain that you need to do that and why, and let them know what is likely to happen. But give them choices like ‘would you like me to explain to them or would you like to?’ and ‘who would you like with you?’
“Avoid making them feel punished or ashamed because these experiences are a real barrier to connection and collaboration. They are still learning about the world and that’s OK.”
If you DON’T have your child’s consent to look at their phone
If you don’t have your teenager’s consent to look at their phone – and you’ve done so and seen something that is cause for concern – Hird suggests asking yourself two questions.
Firstly, what is the worst thing that will happen if I address this? And secondly, what is the worst thing that will happen if I don’t address this?
“I am sure the answer to the first question involves making a teen angry and having an impact on levels of trust, but the answer to the second question is likely to make your decision to act or not pretty simple,” she added.
“When talking to your teen, take responsibility. Apologise for not being open with them about looking at their phone, but explain your reasons for doing so.”
Dr Hall noted that in this instance, repair becomes especially important.
“Acknowledging the breach of trust, explaining the concern clearly, and working together to renegotiate boundaries helps model accountability and respect,” she said.
“Repairing trust is often more impactful than the original rule-setting, as it teaches young people how relationships recover after mistakes.”
Once you have resolved the matter of concern, talk to your teen about how you will balance privacy and safety moving forward.
Dr Hall concluded: “Ultimately, phone safety is not about constant surveillance. It is about gradually teaching young people how to manage privacy, boundaries and risk online, while maintaining an open, supportive line of communication so they know they can ask for help when they need it.”