LLNL and Meta Co-Develop Future of Materials with Groundbreaking Polymer Chemistry Dataset for Training AI Models | Newswise


Newswise — Polymers are fundamental to our daily lives, serving as the core components for a wide array of goods, including clothing, packaging, transportation infrastructure, construction materials and electronics. Advances in polymer science open pathways for recycling and upcycling waste materials into more valuable chemical feedstocks. Polymers can also have an outsized environmental impact: many widely used polymers are per- and polyfluoroalkyl substances (PFAS), commonly recognized as “forever chemicals.”

In a pioneering partnership to accelerate materials discovery with artificial intelligence (AI), researchers from Lawrence Livermore National Laboratory (LLNL) and Meta have created the world’s largest open dataset of atomistic polymer chemistry — a trove of millions of quantum-accurate simulations designed to help AI model the complex behavior of plastics, films, batteries and countless everyday materials.

In a recent paper, the team details Open Polymers 2026 (OPoly26) – a dataset offering an unprecedented number and diversity of polymer structures, each paired with simulations performed at quantum accuracy. OPoly26 is a massive reference library that enables AI to learn patterns from millions of pre-computed polymer structures in hours or days, addressing a longstanding gap in polymer data and laying the foundation for safer, faster and more sustainable materials design. The OPoly26 paper formalizes the dataset’s release and demonstrates how the data improves the performance of machine-learned interatomic potentials (MLIPs) on polymer materials.

The work builds on the Meta and Lawrence Berkeley National Laboratory (LBNL)-led Open Molecules 2025 (OMol25) Dataset, which is making waves with its sweeping collection of open molecular data aimed at advancing AI-driven chemistry. The OPoly26 dataset contains more than 6 million density functional theory (DFT) calculations on polymeric chemical systems, making it nearly ten times larger than the next largest comparable polymer dataset.

LLNL’s partnership with Meta — described by LLNL materials scientist and OPoly26 co-principal investigator (PI) Evan Antoniuk as a “natural fit” — seeks to address this shortfall. By generating critical missing data on polymers with the shared goals of expanding and democratizing open datasets for materials scientists, the team hopes to accelerate the pace of discovery across polymer chemistry.

“This fills a huge gap,” said Antoniuk. “We see this as a community resource, one that we hope becomes the go-to starting point for anyone interested in performing atomistic simulations of polymers.”

LLNL contributed significant computational power and polymer domain knowledge — generating a diverse set of polymer structures and running simulations to help model how these polymers behave in real-world conditions. In turn, Meta contributed vast computational resources to perform 1.2 billion core hours of DFT simulations and train state-of-the-art MLIP models, leveraging the expertise that had already been refined during their earlier molecular effort.

“Meta’s partnership with LLNL demonstrates how open science and AI can accelerate breakthroughs in materials research,” said Rob Sherman, vice president of policy at Meta. “By making this dataset publicly available, we’re giving scientists potent new tools to address critical challenges in healthcare and beyond.”

LLNL is uniquely positioned to generate the OPoly26 dataset at the scale and fidelity required. Researchers tapped into LLNL’s Tuolumne, the world’s 12th fastest supercomputer and companion to the exascale El Capitan, leveraging this hardware with their collective expertise to compress years of simulation work into months and enabling the dataset to reach a scale unmatched in polymer science.

“Comprehensive coverage of this chemical space is essential to the success of the OPoly26 dataset,” said LLNL staff scientist Nick Liesen. “We have worked to leverage pipelines that take us from a simple text string to fully atomistic representations of polymer dynamics at scale.”
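
The release doesn’t name the tooling behind that pipeline, but the general pattern is well established. As a minimal sketch, assuming the open-source RDKit library and a SMILES string as the “simple text string” (the monomer below is a hypothetical example, not one from OPoly26):

```python
# Minimal sketch of a text-string-to-structure pipeline, assuming the
# open-source RDKit library; the OPoly26 team's actual tooling is not
# specified in this article, and the monomer SMILES below is hypothetical.
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "CC(C)C(=O)OC"                      # repeat unit written as a text string
mol = Chem.MolFromSmiles(smiles)             # parse the string into a molecular graph
mol = Chem.AddHs(mol)                        # add explicit hydrogens
AllChem.EmbedMolecule(mol, randomSeed=42)    # generate 3D coordinates
AllChem.MMFFOptimizeMolecule(mol)            # relax the geometry with a classical force field

# The resulting atomistic coordinates are what a DFT code or an MLIP consumes.
conf = mol.GetConformer()
for atom in mol.GetAtoms():
    p = conf.GetAtomPosition(atom.GetIdx())
    print(atom.GetSymbol(), round(p.x, 3), round(p.y, 3), round(p.z, 3))
```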

Beyond performing all the DFT calculations, researchers at Meta trained and benchmarked machine-learned interatomic potentials at scale, enabling the team to evaluate how well AI models generalize across small-molecule and polymer chemistry. The paper reports substantial improvements in model accuracy when polymer data is incorporated alongside small-molecule training sets, highlighting the importance of training AI on data that reflects real-world complexity.

Understanding why certain polymers, including PFAS-based materials, resist chemical change requires models that can accurately describe both reactive and nonreactive behavior. Capturing this behavior under realistic conditions required careful attention to reactive configurations, according to LBNL chemist and OPoly26 co-PI Sam Blau, who also previously co-led OMol25.

“Reactivity — the breakage and formation of chemical bonds — is central to polymer synthesis, manufacturing, aging and recycling, and to nanoscale patterning of polymer thin films for semiconductor manufacturing,” said Blau. “By going beyond stable structures and explicitly sampling hundreds of thousands of reactive configurations, we aim to accurately describe the reactive events that often govern polymer behavior under real-world conditions.”

Beyond outlining how the dataset was generated and performing standard tests of MLIP performance, the OPoly26 paper also introduces an initial suite of polymer-specific evaluation tasks to benchmark how effectively these models capture simulated polymer phenomena and interactions, such as polymer solvation. Future work will include evaluating the MLIP models against experimental measurements, offering a gauge of how well they can capture real-world polymer properties.

“LLNL’s significant investment in high-performance scientific computing and computational materials science capabilities has been critical to achieving the scale needed to cover many thousands of distinct chemical structures,” said LLNL Materials Science Division Leader Ibo Matthews. “That scale is essential not only for generating the data, but for rigorously evaluating how well AI models perform across the full range of polymer behaviors relevant to real-world applications.”

With a focus on open collaboration, the team is making all data publicly available to fuel polymer advancements across academia, industry and government. The authors also emphasized that OPoly26 is being released under an open license to maximize reuse and reproducibility. Through this open approach, the partnership ensures that the benefits of this public-private investment flow broadly across the entire research community.

The team includes LLNL scientists Brian Van Essen, James Diffenderfer, Helgi Ingolfsson and Supun Mohottalalage, and polymer simulation experts Amitesh Maiti and Matt Kroonblawd from the Lab’s Materials Science Division. Co-authors also included LBNL’s Nitesh Kumar and Lauren Chua. Blau and Kumar’s work was funded by the Center for High Precision Patterning Science (CHiPPS), while Chua was supported by her DOE Computational Science Graduate Fellowship. LLNL’s Laboratory Directed Research and Development program funded the LLNL researchers.

This partnership was made possible through a data transfer agreement, facilitated by LLNL’s Innovation and Partnerships Office (IPO). IPO is the Laboratory’s focal point for industry engagement and facilitates partnerships to deliver mission-driven solutions that support national security and grow the U.S. economy. To connect with LLNL on industrial partnerships in Advanced Computing, AI and Quantum technologies, contact IPO Business Development Executive Clarence Cannon.




Capital Power CEO excited about Alberta AI data centre opportunities | Globalnews.ca


The chief executive of Capital Power Corp. is expressing enthusiasm about opportunities to power new data centres in Alberta, as the province prepares to hammer out rules for connecting more projects to the grid without jeopardizing consumer reliability and affordability.


“The market environment is increasingly becoming more attractive for Alberta. The pace at which the announcements are coming out may not be at the pace that the market is expecting,” Avik Dey told analysts on a conference call Wednesday to discuss the company’s fourth-quarter results.

“But I think below the surface, the work that’s being done to facilitate new generation coming in… has been in some ways leading North America.

“We continue to be excited about it, and frankly more excited today than I’ve been at any other point in time.”

Data centres are enormous facilities that house the computing firepower needed for artificial intelligence and other applications. Such operations require massive amounts of energy to run and cool the computer servers.


The Alberta government aims to attract $100 billion in data centre development by the end of this decade, hoping to lure tech behemoths like Meta Platforms Inc.

Some power generators have been looking at opportunities to provide power exclusively to a tech partner, while others have been eyeing options to add more juice to the overall grid.

The province aims to fast-track the “bring your own power” proposals through the regulatory process.




The Alberta Electric System Operator is allowing the connection of up to 1,200 megawatts of large-load projects until 2028 — a small fraction of what had been requested — so as not to compromise reliability.


That capacity has been snapped up by TransAlta and a joint venture between Pembina Pipeline Ltd. and Kineticor.


The grid operator is consulting industry as it develops a long-term plan to enable more data centre investment without overburdening the province’s power system.

Capital Power has said its Genesee Generating Station west of Edmonton would be an ideal spot for a data centre partner to set up shop.


Capital Power’s Genesee plant is seen near Edmonton in an Oct. 19, 2022, handout photo.

Jimmy Jeong/Capital Power via The Canadian Press

“I could not be more emphatic about the fact that we think we’ve got a world-class site that can materially increase generation,” Dey said.

He said its access to land, water and transmission infrastructure makes Genesee “probably one of the most attractive generation sites anywhere in North America” with an ability to expand.

In December, Capital Power announced a memorandum of understanding with New York-based Apollo Funds to form a US$3-billion investment partnership to buy U.S. merchant natural gas power assets.


Separately, it said it had entered into a binding MOU to negotiate a 250-megawatt electricity supply agreement with an unidentified investment-grade data centre developer in Alberta with an expected 2028 start date.




Earlier Wednesday, Capital Power reported a $13-million net loss for the fourth quarter, compared to net earnings attributable to shareholders of $240 million a year earlier.

The loss amounted to 12 cents per share versus a profit of $1.75 per diluted share during the final three months of 2024.

The Edmonton-based utility says its revenues and other income were $1.08 billion, an increase from $853 million in the prior-year quarter.

Adjusted funds from operations rose to $244 million from $182 million year-over-year.


© 2026 The Canadian Press


New System Designed to Protect Drones From Cyber Threats | Newswise


Newswise — Adelaide University researchers have initiated the development of a world-first cybersecurity system designed to protect drones from increasingly sophisticated cyber threats.

A new study, led by the Industrial AI Research Centre and published in the international journal Computers & Industrial Engineering, paves the way for safer and more resilient unmanned aerial systems (UAS) that are less vulnerable to hacking, signal disruption and malicious software.

Senior author Professor Javaan Chahl says the research addresses a growing but often overlooked problem: modern drones are effectively flying computers that can be attacked.

“Today’s drones are used in warfare, for emergency response, infrastructure inspections, agriculture, environmental monitoring, logistics and even medical deliveries,” Prof Chahl says.

“They collect large amounts of data, process it onboard, and communicate continuously with operators or cloud-based systems. While this makes drones powerful and versatile, it also makes them vulnerable.”

To solve this, the team has developed a new onboard security architecture based on Software-Defined Wide Area Networking, or SD-WAN, which acts as a smart traffic controller for internet connections.

“Instead of relying on a single link, the drone can use multiple communication pathways at once – such as mobile networks, Wi-Fi or other radio links – and automatically switch between them if one fails or is attacked.”
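
The team’s onboard SD-WAN stack isn’t reproduced in the study as described here, so the sketch below shows only the generic multi-link failover idea in plain Python; the link names, addresses and TCP health probe are illustrative assumptions, not the researchers’ implementation.

```python
# Generic multi-link failover sketch in plain Python; the link names,
# addresses and TCP health probe are illustrative assumptions, not the
# researchers' actual SD-WAN implementation.
import socket

# Candidate uplinks in priority order (addresses are hypothetical).
LINKS = [
    ("cellular", "10.0.0.1", 443),
    ("wifi", "10.0.1.1", 443),
    ("radio", "10.0.2.1", 443),
]

def link_is_healthy(host: str, port: int, timeout: float = 1.0) -> bool:
    """Probe a link by attempting a TCP handshake within a short timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def select_link():
    """Return the first healthy link, mimicking SD-WAN automatic failover."""
    for name, host, port in LINKS:
        if link_is_healthy(host, port):
            return name, host, port
    raise RuntimeError("no healthy uplink available")

name, host, port = select_link()
print(f"routing traffic over {name} via {host}:{port}")
```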

According to first author Tom Scully, PhD candidate and cybersecurity expert, if a drone is hacked, the impact is not just digital.

“A cyber-attack can interfere with flight controls, disrupt communications, expose sensitive data, and even cause a physical accident.”

The researchers say that many drones still rely on basic communication methods that lack encryption – the digital equivalent of sending sensitive information on an open radio channel. This means that attackers could intercept data, inject false commands or overwhelm the drone’s systems.

The system also includes a next-generation firewall, which works like an advanced security gate. It monitors incoming and outgoing data in real time, blocks suspicious activity, and ensures that only authorised communications are allowed.

Importantly, this firewall runs directly on the drone, rather than relying on remote systems.

One of the most innovative aspects of the research is the inclusion of malware sandboxing – a technology normally found in large corporate networks – where suspicious files can be opened and examined without risking damage. If malicious behaviour is detected, the system can block it immediately.

The researchers have successfully demonstrated the software on a drone platform, using real-world onboard computing hardware with cloud-based control systems.

The team plans to conduct future trials to further validate the system in real time, potentially supporting its adoption in commercial, emergency and government drone operations.

“Our goal is simple,” Scully says. “As drones become part of everyday life, we need to ensure they are not only smart and autonomous, but also secure, resilient and trustworthy.”




How AI technology is both powering and polarizing the modern job search | Globalnews.ca


As technology evolves, it can be hard to figure out how to integrate tools such as artificial intelligence into your professional life. When it comes to the job market, more and more young people are using AI to build their cover letters and resumes. For some, the goal is to craft what they hope will be a surefire job application. Unfortunately, using a shortcut like AI could also lead to an application’s rejection.



Devan Mescall, a professor at the Edwards School of Business in Saskatoon, says that AI isn’t inherently bad and that jobseekers can use the tool to help them stand out as applicants. One handy tool is a new robot friend at the school called Reachy. Reachy helps students prepare for tough interviews by asking questions and analyzing answers.

Meanwhile, for those not wanting to use AI in an ever-changing job market, Sask Jobs offers free employment supports to help people with resumes and cover letters and guide applicants through the job search process.


Watch the video above to find out more. 


© 2026 Global News, a division of Corus Entertainment Inc.


Can you spot an AI generated face? Put your skills to the test with our quiz



Technology is rapidly outpacing the human mind’s ability to distinguish real from fake.

Artificial intelligence can, in the blink of an eye, fabricate images and videos that look completely real to the average person.

However, most people think they are good at distinguishing between what’s real and what’s not, so we invite you to put your skills to the test.

Below are six pairs of images. In each case one is real and the other has been created by AI. Test yourself by trying to pick out which is which; answers are at the bottom of this story.

Things to look out for when checking pictures, according to picture editors at The Post, include: Does a person look too ‘polished’ for the scenario around them? Does their face look too symmetrical and perfect? Does their clothing have natural wrinkles, fabric textures and signs of wear? Are hair strands visible around their head?

John Villasenor, who teaches law and engineering at UCLA, told The Post he suggests looking for “inconsistencies in lighting … and details that don’t actually make sense.”

Extensive testing published in the UK’s Royal Society Open Science journal showed people with ordinary abilities were able to tell AI images from real people only 31 percent of the time.

The study also found subjects were cocky, sure they had spotted fakes far more often than they had.

Anatoly Kvitnitsky, CEO and founder of AI or Not, works with corporations to find images which are computer generated. He also says giveaways aren’t always in the face itself.

“For the human eye, you should look for things in the background. AI is really good at creating a believable main subject, but in the background people’s faces can look blurred. In video, you’ll see people standing still.

“If there is a car in the background, look at the license plate. It may not be perfect. The subheading of a sign can be gibberish. AI currently does a quick job on the background,” he told The Post.

However, it may not stay that way for long. In the earlier days of AI, people could easily spot distorted teeth, glasses or accessories that merged into skin, or ears that didn’t attach properly, but the technology quickly moved beyond that. Kvitnitsky says today’s generators even create pores and imperfections.

“There’s an arms race between the creators and the detectors,” added Villasenor. “The creation techniques get better and then the detector techniques try to catch up.”

Kvitnitsky’s company works with clients such as insurance companies to check images are authentic, such as damaged vehicles or scans of ID cards or checks.

The technology he uses analyzes images at the pixel level to see if they were taken with a real camera.

Images created with publicly available programs such as Google Gemini, Adobe Firefly and ChatGPT are the easiest to catch, as they carry provenance metadata embedded in the file that records which image generator created them and when.
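
As a rough illustration of that kind of check, the toy sketch below scans an image’s embedded metadata for known generator tags using the Pillow library. This is an assumption-laden simplification, not how AI or Not works: real provenance checks validate cryptographically signed C2PA manifests rather than plain text fields.

```python
# Toy provenance check: scan an image's embedded metadata for generator tags.
# Real detectors verify signed C2PA manifests; this only inspects the plain
# metadata fields Pillow exposes, and the marker list here is illustrative.
from PIL import Image

AI_MARKERS = ("c2pa", "firefly", "gemini", "openai", "dall-e", "midjourney")

def looks_ai_tagged(path: str) -> bool:
    img = Image.open(path)
    blobs = [str(v) for v in img.info.values()]        # PNG text chunks, XMP, etc.
    blobs += [str(v) for v in img.getexif().values()]  # EXIF fields
    haystack = " ".join(blobs).lower()
    return any(marker in haystack for marker in AI_MARKERS)

if __name__ == "__main__":
    print(looks_ai_tagged("photo.png"))  # True if a known generator tag is found
```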

A computer-created image of Kvitnitsky made by Google’s Gemini AI. Anatoly Kvitnitsky
A real picture of Anatoly Kvitnitsky, CEO and founder of the company AI or Not. Anatoly Kvitnitsky

But if you’re not a computer, the odds are increasingly stacked against you. The UK study, published in Nov. 2025, found even so-called super-recognizers, who have a natural knack for facial recognition, had only a slight edge, correctly picking out the human faces 54 percent of the time.

The flood of computer-generated images across advertising and social media is also, subconsciously, making people used to seeing AI faces.

When it is misused, the tech can have heavy real-world consequences. In 2024, a finance worker in Hong Kong was lured onto a video conference, apparently with his company’s chief financial officer and other colleagues. After being convinced to transfer $25 million to an outside account, he found out the CFO and other workers had been generated by AI. The request was fake, but the money he sent was all too real.

Kvitnitsky sees the problem as consequential in the long run for society as a whole.

This AI-generated photo of Maria Julissa with El Mencho, who was soon to be gunned down, made the Internet rounds before being called out as a fake.

“The biggest fear that I have about AI is people doubting what they see and what they hear,” he said.

“We can see something real and then assume it’s fake. That throws fuel on our biases. If we just don’t want to believe something, we can just dismiss it as AI.”

Another real-world example appeared over the past week, following the killing of drug lord Nemesio “El Mencho” Oseguera Cervantes by Mexican authorities.

One day later, the Internet lit up with pictures of a model named Maria Julissa apparently sitting next to him, along with claims they had been romantically involved.

Julissa denied knowing or ever having met El Mencho, but it’s easy to see the risks inherent in being associated with a cartel narco-terrorist.

As the lines continue to blur, Kvitnitsky himself acknowledges that, under the right circumstances, even he could be fooled by something AI-generated.

“I have three boys and I am the CEO of an AI detection company, but if I was sent a picture of one of my sons [appearing to show something had happened to them], my emotions would make me forget about all of these things I know,” he admitted. “I would just react to the visual cue.”

ANSWERS: 1) B, 2) B, 3) A, 4) B, 5) A, 6) A


AI minister wants more clarity on OpenAI’s changes after Tumbler Ridge | Globalnews.ca


Artificial Intelligence Minister Evan Solomon says he wants more clarity on OpenAI’s committed safety protocol changes after the Tumbler Ridge, B.C., mass shooting, and isn’t ruling out legislative changes to address the issue.


The company behind ChatGPT on Thursday said it would enhance its police referral and repeat offender detection practices, after it did not elevate the shooter’s AI chatbot activity to police months before she killed eight people and wounded dozens of others.

In a statement Friday, Solomon said OpenAI’s statement did not include “a detailed plan for how these commitments will be implemented in practice.”

He said he would be meeting with CEO Sam Altman next week to “seek further clarity” and assurances of “concrete action.”

“The tragedy in Tumbler Ridge has raised serious questions about how digital platforms respond when credible warning signs of violence emerge,” the minister said. “Canadians deserve greater clarity about how human review decisions are made, how escalation thresholds are applied, and how privacy considerations are balanced with public safety.


“We will be seeking further clarity on how human review is conducted and whether Canadian context and best practices are appropriately embedded in those decisions. I will also be consulting with my cabinet colleagues on additional options.”

Solomon added he would also be meeting with other AI companies in the coming weeks “to ensure there is a consistent and clear approach to escalation, local coordination, and youth protection.”


“Decisions affecting Canadians must reflect Canadian laws, Canadian standards, and Canadian expertise,” he said.

“All options remain on the table as we assess what further steps may be necessary. Public safety must come first.”

Solomon and other federal ministers expressed frustration with OpenAI after the company did not present an action plan during a meeting in Ottawa on Tuesday.

The ministers said they would give OpenAI a chance to come back with one before considering a legislative response to the issue of how AI companies handle and address users’ violent behaviour.




Researchers and opposition MPs have urged the federal government to speed up efforts to regulate the AI industry in the wake of the Tumbler Ridge shooting.


OpenAI acknowledged on Thursday that, if it had detected Jesse Van Rootselaar’s ChatGPT activity today, it would have flagged it to law enforcement under its current police referral thresholds, which were updated “several months ago.”

Instead, that activity was only referred to RCMP after the shooting occurred.

It also revealed that it found a second ChatGPT account linked to Van Rootselaar after she was identified as the shooter in Tumbler Ridge, despite her first account having been shut down last June for “violent” activity, and despite a system meant to detect repeat violators of OpenAI’s policies.


The company committed to further enhancing both of those protocols, as well as establishing direct points of contact with Canadian authorities and developing better practices for connecting users to local mental health supports if they exhibit troubling behaviour.

B.C. Premier David Eby said Thursday he will also be meeting with Altman, calling OpenAI’s commitments “cold comfort for the people of Tumbler Ridge.”

He told reporters Friday in Vancouver there is no firm date yet for the meeting with the CEO, who has yet to comment publicly on the Tumbler Ridge tragedy or the changes his company says it will make in Canada.

“I want to recognize that OpenAI did come forward,” Eby said. “They did bring the information forward to police. They didn’t try to cover it up after the fact, but this was a colossal, horrific mistake, I guess, is the most generous interpretation I can offer, to fail to bring that information forward to authorities.


“It’s important that Mr. Altman realizes that, and I will be looking for his support for a national standard across Canada, a national threshold where all AI companies must report — and clear consequences for if they fail to report — incidents where people are planning violence, planning to hurt other people, and using these tools to develop those plans.”

—with files from the Canadian Press

© 2026 Global News, a division of Corus Entertainment Inc.


Burger King under fire over ‘dystopian’ new AI technology trial in restaurants


Burger King has come under fire for new tech it’s trialling (Picture: NIKLAS HALLE’N/AFP via Getty Images)

Burger King fans have been left feeling like they’re living in an episode of Black Mirror, after learning about a new technology the chain is currently testing.

Restaurant Brands International, the owner of Burger King, confirmed this week that it’s trialling an OpenAI-powered chatbot inside headsets across 500 restaurants in the US, with a plan to later roll it out nationwide.

The AI chatbot, known as ‘Patty’, can talk to employees through the headsets, and is intended to be a ‘coaching tool’, according to Thibault Roux, Burger King’s chief digital officer in the US and Canada.

Patty will combine data across several aspects of the business, including drive-thru conversations, stock levels, and kitchen equipment. Staff will be able to ask the chatbot questions, such as how to make burgers and for instructions on cleaning equipment like the milkshake machine. 

It is also being trained to ‘measure friendliness’ by recognising certain words such as ‘please’, ‘thank you’ and ‘welcome to Burger King’ and the chain is said to be looking into ‘capturing the tone of conversations’ too, according to The Verge. 

A worker hands food to a customer at the drive-thru window of a Burger King fast food restaurant in Hialeah, Florida, US
The AI chatbot is in the headsets (Picture: Bloomberg via Getty Images)

Other functions include the ability to alert employees to issues, such as a drinks machine being low on Coca-Cola, and flagging issues that customers have reported, such as messy toilets.

And as Patty is integrated with Burger King’s cloud-based point-of-sale system, it can completely remove an unavailable product from all digital menus and kiosks within 15 minutes, to avoid customer disappointment.
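
The article describes the behaviour rather than the mechanism, but the menu sync amounts to a simple stock-triggered rule. A minimal sketch, with every name hypothetical rather than part of Burger King’s actual POS API:

```python
# Illustrative only: hypothetical names, not Burger King's actual POS system.
# Mirrors the described behaviour: when stock hits zero, the item is pulled
# from all digital menus (the article says within 15 minutes).
MENU_VISIBLE = {"whopper": True, "vanilla_shake": True}  # item -> shown on menus
STOCK = {"whopper": 120, "vanilla_shake": 0}             # units on hand, per POS

def sync_menus() -> list[str]:
    """Hide any item whose stock has hit zero; return the items removed."""
    removed = []
    for item, qty in STOCK.items():
        if qty == 0 and MENU_VISIBLE.get(item, False):
            MENU_VISIBLE[item] = False   # propagate to digital menus and kiosks
            removed.append(item)
    return removed

print(sync_menus())  # ['vanilla_shake']
```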

Roux claims the technology is something Burger King is ‘tinkering with’ but acknowledges it’s a ‘risky bet’, as it’s not something ‘every guest is ready for’.

And he’s certainly not wrong.

On social media there’s been quite a lot of backlash to the trial already, with Facebook users branding it ‘dystopian’, comparing it to something out of a ‘Black Mirror’ episode, and claiming it’s made them feel as if they are ‘living in hell’.

Burger King fast food restaurant. Burger King is a subsidiary of Restaurant Brands International.
The AI chatbot is being tested at 500 restaurants across the US (Picture: Getty Images)

Following this, Burger King has reiterated that Patty is intended as a ‘coaching tool’ and not a way for the company to ‘track or evaluate staff saying specific words or phrases’.

A spokesperson for the company told The Grocer: ‘BK Assistant is a coaching and operational support tool built to help our restaurant teams manage complexity and stay focused on delivering a great guest experience.

‘It’s not about scoring individuals or enforcing scripts. It’s about reinforcing great hospitality and giving managers helpful, real-time insights so they can recognise their teams more effectively.’

It has not yet been confirmed whether Burger King UK could start utilising this technology. Metro has contacted the fast food chain for further comment. 

This isn’t the first time a fast food chain has tested out AI, with both Taco Bell and McDonald’s previously introducing AI into their drive-thrus in the US.

Neither trial has proved overly successful, though, with McDonald’s removing AI-powered voice ordering from more than 100 locations in July 2024, after several errors were made. This included customers being given multiple unwanted items, and some unusual orders like bacon on ice cream.

Taco Bell first introduced AI to 500 drive-thrus in 2023, but has since reportedly slowed down the US-wide rollout of the technology, after experiencing similar issues.

Customers have complained on social media about mistakes and glitches with the tech, while others have tried to prank it, with one person notably trying to see what would happen if they ordered 18,000 cups of water. 

Spoiler alert: It did not end well. 

Do you have a story to share?

Get in touch by emailing MetroLifestyleTeam@Metro.co.uk.


Minister ‘disappointed’ in OpenAI, but why is AI regulation taking years? | Globalnews.ca


Federal ministers who met with representatives of OpenAI expressed disappointment Wednesday that the company did not present steps it will take to improve its safety measures — including when police are warned of a user’s online behaviour.


Experts in the field and opposition MPs, however, are questioning why the federal government was so slow to regulate artificial intelligence in the years before concerns were raised this month following the Tumbler Ridge, B.C., mass shooting.

Artificial Intelligence Minister Evan Solomon said he is giving the company a chance to update him in the coming days on “concrete” actions before he and other ministers address the issue through legislation, though he noted a series of bills addressing AI safety and privacy are in the works.

“Look, we told this company we want to see some hard proposals, some concrete action,” Solomon told reporters in Ottawa while heading into a Liberal caucus meeting.


“We’re disappointed that by the time they came here, they did not have something more concrete to offer, but we’ll see very shortly what they have,” he added, noting that “all options” were on the table for how the government might act.

Solomon summoned representatives of the company behind ChatGPT to Ottawa after it emerged that the shooter who killed eight people in Tumbler Ridge on Feb. 10 was flagged internally last June for her activity on the AI chatbot.

OpenAI did not alert the RCMP until after the mass shooting occurred, saying the “violent” activity did not meet the internal threshold of an “imminent” threat when the account was flagged and banned over seven months prior.




Justice Minister Sean Fraser, Public Safety Minister Gary Anandasangaree and Culture and Identity Minister Marc Miller — whose ministry is working on new online harms legislation — were also present at the meeting.


Prime Minister Mark Carney told reporters Wednesday he had not yet been briefed on the OpenAI meeting, but suggested he would be open to changes.

“I sat with the families of Tumbler Ridge, met with the first responders, saw the horror that — what happened and the pain that’s been caused,” he said.

“Obviously, anything that anyone could have done to prevent that tragedy or future tragedies must be done. We will fully explore it to the full lengths of the law and we’ll be very transparent about that process.”


Solomon and other ministers who were at the meeting said any action the government takes would focus on the threshold used to escalate concerning behaviour to law enforcement.


“There are issues around the assessment on credibility of a threat and the imminence of a threat that, in my view, if properly administered, could prevent tragedies on a go-forward basis,” Fraser said.

“The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government is going to be making changes.”

OpenAI told Global News Tuesday evening that the company appreciated the “frank discussion on how to prevent tragedies like this in the future.”


“Over the past several months, we have taken steps to strengthen our safeguards and made changes to our law enforcement referral protocol for cases involving violent activities, but the ministers underscored that Canadians expect continued concrete action and we heard that message loud and clear,” a spokesperson said.

“We’ve committed to follow up in the coming days with an update on additional steps we’re taking, as we continue to support law enforcement and work with the government on strengthening AI safety for all Canadians.”

OpenAI did not detail exactly what changes have been made in recent months, and did not immediately respond to Global News’ request for comment Wednesday.

Why aren’t any new rules in place?

Researchers who study online harms and AI say the Tumbler Ridge incident shows the AI industry shouldn’t be left to regulate itself, and that the government needs to be more proactive.


“The ministers ought to be looking at themselves as the ones who are responsible for undertaking regulation seriously when it comes to ChatGPT and other similar tools,” said Jennifer Raso, an assistant professor in law at McGill University.

“Pulling people up to Ottawa after one of the most horrible mass shootings in Canada to have them account for themselves after the harm’s been done seems to be too little, too late.”

Conservative MP Michelle Rempel Garner said she is “very concerned about the government’s capacity and willingness to address artificial intelligence policy writ large” and the pace of progress, noting no meaningful regulations have been enacted since ChatGPT emerged in 2022.

“I certainly don’t see it as a front-burner issue,” she told reporters ahead of question period Wednesday.

“I am calling on the government to take this issue a little more seriously, to be less reactive, and to restate that Conservatives are willing to collaborate with the government on smart policy and certainly discussions on the topic at least.”

NDP interim leader Don Davies told Global News in an interview that the government’s pace has been “glacial.”

“AI isn’t new. Online harms and threats and all sorts of intimidation and disclosure of intimate pictures, this is not new. This has been going on for years and the government has been fully aware of it,” he said.


“Where they’ve been absolutely, I think, negligent is in acting.”

Efforts to regulate the AI industry and address online harms through legislation died in Parliament last year ahead of the federal election.

The Artificial Intelligence and Data Act would have required AI companies to ensure their platforms are monitored for safety concerns and misuse, while enacting “proactive” measures to prevent real-world harm.

Fraser introduced legislation late last year that would crack down on the sharing of non-consensual sexualized deepfake images generated by AI, following similar bills enacted by provinces like British Columbia.




Solomon has promised to unveil a new federal AI strategy in the first quarter of this year, delaying its launch from late 2025.

In a speech last year, he said Ottawa would avoid “over-indexing on warnings and regulation,” reflecting the Carney government’s emphasis on AI’s economic benefits and speedy adoption of the technology.


A summary of public comments submitted during consultation on the forthcoming strategy showed Canadians are deeply skeptical of AI and want to see government regulation, particularly addressing online harms and mental health concerns.

While allies like the United Kingdom and European Union have moved to strengthen AI regulation, attempts to do so in the U.S. have been sporadic. U.S. President Donald Trump has ordered states not to pass regulations before a national strategy is in place, but that federal standard has yet to emerge.

Canada’s privacy legislation says private companies “may” — not must — disclose personal information to authorities or another organization if they believe there is a risk of significant harm or that a law will be broken.

Any further decision-making is up to the company itself, leading to internal thresholds like OpenAI’s “imminent” threat identification.

Solomon said Wednesday that work is underway to update the Personal Information Protection and Electronic Documents Act, but did not say when it will be tabled or offer further details.

Anandasangaree expressed confidence that the investigation into the shooting will yield answers, including from OpenAI.

“The number of issues arising around Tumbler Ridge concern me,” he told reporters after Wednesday’s caucus meeting.

“Yesterday’s meeting was a critical first step with OpenAI. There’s still a lot of unanswered questions, and there’s certainly a sense of frustration and, frankly, a sense that tech companies overall are not doing enough to address the issues around information that they hold.”


Solomon emphasized that the government wants to make sure what happened in Tumbler Ridge “does not happen again.”

“Of course a failure occurred here,” he said. “I mean, look what happened.”

—with files from Global’s Touria Izri


OpenAI’s handling of Tumbler Ridge shooter info opens regulation questions | Globalnews.ca


Scrutiny over how OpenAI handled information about the Tumbler Ridge, B.C., mass shooter months before the deadly tragedy provides an opportunity for Canada to consider regulating artificial intelligence companies to inform police in similar scenarios, experts say.


The company behind ChatGPT confirmed last week it “proactively” identified and banned an account associated with Jesse Van Rootselaar in June 2025 for misusing the AI chatbot “in furtherance of violent activities.”

However, it did not inform police at that time because the activity did not meet the higher internal threshold of an “imminent” threat.

OpenAI ultimately contacted RCMP after police say 18-year-old Van Rootselaar killed eight people and wounded 25 others on Feb. 10, before taking her own life.

Artificial Intelligence Minister Evan Solomon summoned representatives to Ottawa on Tuesday to discuss the situation and the company’s safety practices.


Solomon told reporters Tuesday before the meeting that “all options are on the table when it comes to understanding what we can do about AI chatbots.”

Heritage Minister Marc Miller, whose ministry is working with Solomon’s to develop online safety legislation that would cover AI platforms, said the government is taking the time to get that bill right and wouldn’t tie it to what happened in Tumbler Ridge.

“I think there is the need to have legislation to make sure that platforms are behaving responsibly,” he said. “What that looks like is still to be determined, and I can’t discuss timelines with you on that.

“I think in this situation, there is legitimate thirst for easier answers, but I don’t think there are easy answers in this case, particularly with an open investigation. But … we need better answers than the ones we’ve gotten so far.”




Canada’s privacy legislation says private companies “may” — not must — disclose personal information to authorities or another organization if they believe there is a risk of significant harm or that a law will be broken.


Any further decision-making is up to the company itself, leading to internal thresholds like OpenAI’s “imminent” threat identification.

“This is yet another sign that there is a risk with letting OpenAI and other AI developers decide for themselves what is an appropriate safety framework,” said Vincent Paquin, an assistant professor of psychiatry at McGill University who researches the relationship between digital technologies and the mental health of young people.


“Ultimately, ChatGPT is a commercial product. It’s not an approved health-care device. And so it is concerning to see that there is an increasing number of people turning to ChatGPT and other AI products for mental health support and for sensitive discussions about things going on in their lives, without having a clear understanding of the safety of those interactions and the safety mechanisms that are in place.”


The revelations come as OpenAI and other AI chatbot makers face multiple lawsuits in the U.S. over allegations their platforms helped drive young people to suicide and self-harm.

OpenAI denies those allegations and says its safety evaluations show its models refuse most, if not all, requests for harmful content, such as hateful and violent rhetoric and advice related to suicidal ideation.

The Wall Street Journal, which first reported OpenAI’s prior knowledge of Van Rootselaar’s ChatGPT activity, said her posts “described scenarios involving gun violence over the course of several days,” according to people familiar with the matter.


The report said company employees were alarmed by the posts and wrestled with whether to alert police last summer, before the company opted not to.

Global News has not independently verified the details in the report.

The B.C. government said in a statement Saturday that OpenAI officials met with a government representative on Feb. 11 — the day after the shooting — for “a meeting scheduled weeks in advance” to discuss the possibility of opening OpenAI’s first Canadian office.

“OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge,” the government said, but noted OpenAI requested contact information for the RCMP from the province on Feb. 12.




Canada’s privacy commissioner, Philippe Dufresne, has previously said not having a Canadian business office to contact makes it more difficult for his agency to investigate tech companies like TikTok.


Brian McQuinn, an associate professor at the University of Regina and co-director of the Centre for Artificial Intelligence, Data, and Conflict, said the tech industry in general has deprioritized internal safety regulation ever since Elon Musk took over Twitter in 2022, rebranding it as X.

“Basically (after he) fired all the teams doing that kind of work, the other (social media) companies sort of followed suit and realized they could get away with it, too,” he said. “So less staff overhead and fewer headaches being created by your own staff by letting you know things.

“If you don’t know, then you can’t be held responsible.”

Dufresne’s office has launched an investigation into Musk-owned xAI and its Grok chatbot, which is built into the X social media platform, over allegations it facilitated the spread of non-consensual sexualized deepfake images of women and children. Other companies and U.S. states are conducting similar probes.

Musk has criticized the investigations as attempts to stifle free speech and expression.

Sharon Bauer, a privacy lawyer and AI governance strategist based in Toronto, said it’s important for any future legislation or regulation to strike a “fine balance” between individual privacy and the duty to warn of potential threats.

She said the term “imminent” is key.

“That is a really important threshold, because anything lower than that threshold would mean that they would be notifying law enforcement of things that may end up stigmatizing people or creating false positives, which would of course harm those individuals,” she said.


At the same time, Bauer added, “anything too high would mean missing genuine threats, which may have been the case in this situation.”

“I’m hoping that we’ll get answers about this, if they documented their reasoning about why they didn’t contact law enforcement, and that’s going to be really important to analyze and figure out if they made that right decision,” she said.




McQuinn said he also wants to see data about who has been kicked off AI chatbot and social media platforms for threatening to harm themselves or others, and whether there was any real world follow-up on those individuals.

“If the answer’s no, then they are just putting their heads in the sand,” he said.

“These companies (are worth) trillions of dollars, so the amount of money they spend on anything related to staffing and safety is negligible.”


He added that Canada’s forthcoming AI strategy needs to pair economic benefits and adoption strategies with robust safety protocols that answer these critical questions.

Paquin cited a recent California law, which requires large AI companies like OpenAI to report to the state any instances of their platforms being used for potentially “catastrophic” activities, as something Canada should model its own potential regulation after.

However, that law defines a catastrophic risk as something that would cause at least $1 billion in damage or more than 50 injuries or deaths.

The law has been praised by some AI companies like Anthropic for balancing public safety with allowing continued “innovation.”

“We should ask for more transparency and we should also think about a way of having an external oversight over those activities, because we cannot let the AI developers be their own judge, the judge of their own safety,” Paquin said.

—with files from Global’s Touria Izri


AI robots may outnumber workers in a few decades as firms ramp up investment


Digitally generated image of multiple robots sitting in a row, working on laptops.

Andriy Onufriyenko | Moment | Getty Images

AI robots will exceed the working population within a few decades as more firms adopt AI agents and continue to squeeze costs, a former Citi executive warned on Monday.

Rob Garlick, Citi Global Insights’ former head of innovation, technology, and future of work, told CNBC’s “Squawk Box Europe” that as leaders continue to prioritize profitability, their human workers will be left in the dust.

“We have a leadership system in the economic terms and business terms that celebrates profitability,” Garlick said in a conversation with CNBC’s Steve Sedgwick and Ben Boulos.

“When you marry profitability up with the technology progress, we have the biggest trade in history coming, which is basically that artificial intelligence will be able to do more and more, better and better, cheaper and cheaper, and that will be able to substitute for people.”

Garlick, who recently authored “AI – Anarchy or Abundance? Why the Future of Work Needs Pro-Human Leaders,” explained that his previous research at Citi showed that the number of AI robots is going to skyrocket as a result of these business decisions.

“We’re going to go over the next couple of decades to more moving robots than the working population, and then you add on agents, little agents, and it is going to explode,” he added.


AI robots, ranging from humanoids to domestic cleaning robots and autonomous vehicles, are forecast to number 1.3 billion by 2035, according to a 2024 Citi report led by Garlick. The report projects that figure will climb to more than 4 billion by 2050.

The Citi report even measured how long it would take for a robot to pay for itself through the money saved by replacing a human worker. For example, a $15,000 robot would break even in 3.8 weeks when displacing a $41-an-hour human job, or in 21.6 weeks for a $7.25-an-hour job. Meanwhile, a robot that costs $35,000 would have a payback time of 8.9 weeks against a $41-an-hour job.
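
Those figures are internally consistent: payback is simply robot cost divided by the weekly wage bill displaced. Back-solving from the numbers above implies roughly 96 hours of displaced labour per week, an assumption inferred here rather than stated in the article.

```python
# Payback period = robot cost / (hourly wage x displaced hours per week).
# The ~96 h/week operating assumption is back-solved from the article's
# figures, not stated in it.
HOURS_PER_WEEK = 96  # inferred, not from the report

def payback_weeks(robot_cost: float, hourly_wage: float) -> float:
    return robot_cost / (hourly_wage * HOURS_PER_WEEK)

print(round(payback_weeks(15_000, 41.00), 1))  # 3.8 weeks
print(round(payback_weeks(15_000, 7.25), 1))   # 21.6 weeks
print(round(payback_weeks(35_000, 41.00), 1))  # 8.9 weeks
```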

“You can already buy a humanoid today, which gives you a payback period versus human workers of less than 10 weeks,” Garlick told CNBC, citing a figure from his book. “Humans can’t compete on this basis.”

The rise of AI agents

Microsoft’s Work Trend Index report showed that 80% of leaders expect AI agents to be largely integrated into their AI strategy within the next 12 to 18 months. AI agents are a type of software program that can make decisions and complete tasks without much human direction.

Meanwhile, McKinsey & Company’s global managing partner, Bob Sternfels, told Harvard Business Review that the firm currently employs 20,000 AI agents alongside 40,000 humans. A year prior, the company had only 3,000 agents, and Sternfels predicts that 18 months from now there will be an equal number of employees and agents.


Tesla CEO Elon Musk also shared similar views at the World Economic Forum’s flagship conference in Davos last month, saying that AI will likely surpass human intelligence by the end of this year.

“My prediction is, in the benign scenario of the future, that we will actually make so many robots in AI that they will actually saturate all human… there will be such an abundance of goods and services because my prediction is that there’ll be more robots than people,” Musk said.

Fears around AI replacing workers have mounted in the past year as major firms, including Amazon, Salesforce, Accenture, Heineken, and Lufthansa, have cited the technology as part of the reason for eliminating thousands of roles.

Kristalina Georgieva, managing director at the International Monetary Fund, told CNBC in January that AI is “hitting the labor market like a tsunami” and warned that “most countries and most businesses are not prepared for it.”

In the U.S., AI played a role in almost 55,000 layoffs in 2025, according to December data from consulting firm Challenger, Gray & Christmas.

However, some leaders are striking a more positive tone. Nvidia’s CEO Jensen Huang predicts that the “AI boom” will create six-figure salaries for the workers building AI and chip factories. Huang said the technology will boost skilled trade work, such as for plumbers, electricians, construction, and steel workers.