The biggest video-generating tool is no more – is the AI bubble bursting?



When ChatGPT came on the scene in 2022, Silicon Valley types immediately compared AI to the dawn of the internet in the 1990s.

OpenAI received the same fanfare when it unveiled Sora two years later. By typing a sentence or two into a box on a phone screen, a user could generate a short video that looked straight out of Hollywood.

Disney even signed a three-year, $1 billion deal allowing Sora users to generate clips featuring characters like Mickey Mouse, Cinderella or Yoda.

Yet OpenAI abruptly announced yesterday that it is pulling the plug on its Sora consumer app and web service. No reason was given.

‘To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing,’ OpenAI said in a post on X.

OpenAI confirmed to Metro that it would continue to use video-generation technologies to teach skills to robots.

‘As our focus and compute demands grow, the Sora research team continues to focus on world simulation research to advance robotics that will help people solve real-world, physical tasks,’ a spokesperson added.

Disney told Metro it ‘respects OpenAI’s decision to exit the video generation business’ and is keen to license its property to an AI company.

AI enthusiasts and critics alike were taken aback by the overnight end of Sora. Only the day before, OpenAI published a blog post about how to safely create content with its ‘state-of-the-art video generation’ app.

Some, however, weren’t exactly surprised. After all, Sora got into hot water last year when people created videos with copyrighted material.

A spokesperson for Disney added: ‘We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators.’

But almost all began to wonder the same thing – is Sora’s end a sign that the AI bubble is about to burst, as the Bank of England feared last year?

Metro spoke with nearly a dozen financial analysts, AI experts and stock researchers about whether this will happen.

There were mixed feelings.

Is the AI bubble about to burst?

AI is Wall Street and the City of London’s hottest trade – but for how long is anyone’s guess (Picture: Metro)

‘Every bubble starts with a story people want to believe,’ says Dat Ngo of trading guide Vetted Prop Firms.

‘In the late 90s, it was the internet. Today, it’s artificial intelligence.

‘The parallels are hard to ignore: skyrocketing stock prices, endless hype and companies investing billions before fully proving their business models.’

In 2000, dot-com whizzes were minting easy millions from internet start-ups. When interest rates were hiked, investors sold off their holdings, companies went bust and people lost their jobs.

Some stock researchers worry that the AI boom could lose steam when the companies spending billions on the tech see profits dip.

Tech giants are spending serious money this year on the data centres that power AI: Amazon is spending $200 billion; Google, $185 billion; Microsoft, $114 billion; and Meta, $135 billion. (As a video-generation service, Sora required far more computing power than most consumer AI products.)

Sora sparked fears that AI could take Hollywood jobs, such as those of actors and animators (Picture: Sora)

Yet Dr Alessia Paccagnini, an associate professor at University College Dublin’s Michael Smurfit Graduate Business School, points out that consumers are spending just $12 billion on AI – a fraction of the roughly $634 billion those four companies are pouring in. That’s a big difference.

AI-focused stocks are concentrated in US markets, but because so many investors around the world have bought in, any fallout would be felt globally.

Dr Paccagnini adds: ‘As a worst-case scenario, if the bubble does burst, the immediate consequences would be severe – a sharp market correction could wipe trillions from stock valuations, hitting retirement accounts and pension funds hard.’

‘In my opinion, we should be worried, but being prepared could help us avoid the worst outcomes.’


‘AI hype is overly optimistic’

Despite scepticism, AI feels like it’s everywhere these days, from dog bowls and fridges to toothbrushes and bird feeders.

And it might continue that way for a while, even if not as enthusiastically as before, says Professor Filip Bialy, who specialises in computer science and AI ethics at the Open Institute of Technology.

‘AI hype – an overly optimistic view of the technological and economic potential of the current paradigm of AI – contributes to the growth of the bubble,’ he says.

‘However, the hype may end not with the burst of the bubble but rather with a more mature understanding of the technology.’

Sora added video to OpenAI’s image-generation tools on ChatGPT (Picture: Samuel Boivin/NurPhoto/Shutterstock)

Leeron Hoory, a finance journalist at BusinessHeroes, adds that calls for caution are, much like AI technology itself, premature.

She says the tech industry has a history of spending big to deliver change, as it did with the computer revolution – and it took five years before any sort of reckoning came.

‘But AI isn’t a passing trend like the dot-com rush,’ Hoory says, ‘it’s an infrastructural shift that will underpin everything from logistics to medicine to governance.

‘The market isn’t overheated – it’s still catching up to the scale of what’s coming.’





Time for ‘serious’ talks on how AI uses news media information: minister – National | Globalnews.ca


Culture Minister Marc Miller says the government must have a serious conversation about artificial intelligence (AI) systems’ use of news.


“Having the news cannibalized and regurgitated undermines the spirit of the use of that news in the first place and the purpose for which it’s used and we have to have a serious conversation with the platforms that purport to use it including AI shops,” Miller said.

Miller was asked whether the government is open to extending its Online News Act to AI companies. The Online News Act requires Meta and Google to compensate media outlets for displaying their content. Meta pulled news off its platforms in response, but Google has been making payments under the act.

He said it’s not a question about opening up the legislation but of making sure companies are acting responsibly.

Miller was speaking at a national summit of AI and culture, a day after a new report said AI systems depend on Canadian journalism for the information they provide users but don’t offer compensation or proper attribution in return.


Researchers at McGill University’s Centre for Media, Technology and Democracy tested 2,267 Canadian news stories on ChatGPT, Gemini, Claude and Grok.

They found when the platforms were asked about Canadian news events from their training data, they did not provide source attribution about 82 per cent of the time.

The report said AI companies now extract value from journalism “at every stage: ingesting news archives as training data, producing derivative content without naming the sources, and delivering answers to consumers that could reduce the need and incentive to visit the original source.”


The system “accelerates the economic decline of the journalism it relies on,” the researchers said.




Miller said Tuesday he had seen the report. He said he wants the government’s legislation to work, and that “this is about people paying their fair share.”


Asked whether that principle extends to AI companies, Miller said “the principle of proper compensation for use of proprietary material doesn’t change.”

Miller reiterated that the government is open to a deal to bring news back to Meta’s platforms.

The McGill researchers said in a policy brief the problems posed for journalism by social media and AI systems are distinct.


While social media platforms “captured advertising revenue by aggregating attention around news content,” the brief reads, “AI companies are doing something different: they are absorbing the substance of journalism, and delivering it directly to consumers as their own product.”

That means the “consumer’s need to visit the source is not just reduced by algorithmic demotion, as it was with social media. It is rendered unnecessary by the AI’s response itself.”

A coalition of Canadian news outlets that includes The Canadian Press, Torstar, the Globe and Mail, Postmedia and CBC/Radio-Canada is suing OpenAI in an Ontario court. They argue OpenAI is using their news content to train ChatGPT, breaching copyright and profiting from the use of that content without permission or compensation.

When he was asked Tuesday about the government’s position on whether the use of copyrighted materials for AI training violates copyright law, Miller said he doesn’t believe there is a need to open up the law.


“Intellectual property reform is a complex issue that goes over and above artificial intelligence, and it is a multi-year process. So it’d be irresponsible in any context to stand here and say nothing’s going to happen,” he said.

“But the current copyright law does and should protect those that have created material and people need to be compensated properly.”

In a 2024 consultation on copyright and artificial intelligence, AI companies maintained that using the material to train their systems doesn’t violate copyright.

The news publishers’ lawsuit was launched in late 2024. It’s unclear how long it will take for the court to make a decision on the case.

The House of Commons heritage committee heard last year from groups and unions representing creative industries that take issue with AI’s use of copyright-protected works without permission and want to establish a licensing system covering such use.

© 2026 The Canadian Press


AI regulation and Canadians’ privacy in wake of Tumbler Ridge shooting | Globalnews.ca


Regulators, cybersecurity and law experts all gathered in Victoria this month to work towards finding the balance between online safety, innovation and protecting Canadians’ privacy.


While the Victoria International Privacy and Security Summit featured dozens of workshops, keynotes and presentations on a wide variety of topics, last month’s mass shooting in Tumbler Ridge, B.C., was never far from people’s minds.

“In the ongoing context of discussions about whether platforms should be required to disclose information to prevent tragedies like Tumbler Ridge, it is an absolutely critical and timely topic,” said Canada’s privacy commissioner, Philippe Dufresne, in his keynote address.

“We need to ensure that Canadians are protected from imminent harms but we must — and can do so — in a way that protects Canadians’ privacy and includes appropriate thresholds,” Dufresne continued.

On Monday, 12-year-old Maya Gebala’s family filed a civil suit against tech giant OpenAI after the company disclosed that it had disabled the shooter’s ChatGPT account in June due to ‘violent activity’ but had not alerted law enforcement.


Gebala’s family said she was shot in the head and neck while trying to lock the library door at her school to protect other students. She remains at the BC Children’s Hospital, where she’s being treated for serious injuries.




In February, OpenAI was summoned to Ottawa to discuss safety concerns and said it would enhance its police referral and repeat offender detection practices. An inquiry set to take place in B.C. will also look at the role artificial intelligence may have played in the shooting.

“Was there manipulation? Was there coercion? Or was it just enough to plant a seed?” Alberta Information and Privacy Commissioner Diane McLeod questioned. “I don’t know, but this is something that we need to be looking at very carefully.”

The lawsuit also claims that OpenAI took no steps to implement age verification or parental consent procedures and accuses the company of knowingly and intentionally permitting ChatGPT to provide pseudo-psychological treatment to the shooter.


This disturbing trend was addressed by Jim Richberg, Fortinet’s head of security, who also ran cyber operations for the Central Intelligence Agency (CIA) for two decades.

“For those of you who are parents, probably the most disturbing statistic I’ve seen is that more than one in seven teenagers in North America is taking mental health advice on a weekly basis from a GenAI chatbot,” Richberg said at the conference. “Almost none of their parents have any idea that it’s going on.”




How to protect kids from online harm while safeguarding their privacy is an ongoing and complex discussion that is being examined by legal experts throughout Canada.


“The basic problem has been the design of these spaces and the algorithms that are pushing content to children that is causing deep harm,” said Emily Laidlaw, an associate professor of law at the University of Calgary and the Canada Research Chair in cybersecurity law.


“The gore videos, eating disorder content and violence, all of which is creating this kind of space that’s manipulating children’s thoughts and there’s no ability for parents to necessarily know that it’s happening,” Laidlaw explained.

Laidlaw said in the wake of the Tumbler Ridge shooting, previously proposed legislation in Canada may now be outdated and will need to be modernized.

“I think that we will likely see that the Online Harms Act will be scrutinized in a different way now,” Laidlaw said. “These types of AI-facilitated harms really didn’t exist in the same way and that shows you how fast this space evolves.”

“I think that when it comes to the online harm space and the privacy space, we have to start thinking about legislation as being almost modular,” Laidlaw explained. “Like building blocks where you can think through inserting new types of technologies, new types of harms, to be able to respond. It needs to be iterative to stay current.”

On Thursday, the federal government tabled a new version of its “lawful access” legislation that would give police new powers to pursue online data for investigative purposes while addressing some of the privacy concerns raised by the original version of the bill.

The new bill would also allow Canadian police to seek authority, through a court, to request transmission data or subscriber information from a foreign company like Google, Meta or OpenAI. However, it does not address calls to require AI companies to report troubling online behaviour to police.


“I want to be clear what C-22 is not. It is not about surveillance of Canadians going on about their daily lives,” said Public Safety Minister Gary Anandasangaree. “It is about keeping Canadians safe in the online space.”

 




While Ottawa has signalled new AI regulations are also on the way, some experts said it’s long overdue.

“We’ve been thinking about the next round of legislation for a long time now, and it just doesn’t come,” said Teresa Scassa, the Canada Research Chair in Information Law and Policy at the University of Ottawa.

“What we need to be thinking about in terms of reform, is reform. We need it, and we need it now,” she said.

Scassa said while privacy commissioners across Canada are working diligently to address new issues sparked by new technology, including AI, they need more tools to enforce their recommendations and investigative findings.


“For example, the Federal Privacy Commissioner can issue findings, but [the Commissioner] has no order-making powers,” she said. “[He] can’t impose fines or penalties on organizations that refuse to comply with recommendations.”

“There’s a possibility of going to the federal court,” Scassa explained. “But again, it just makes it slower and more cumbersome, so it’s just having those additional tools that would make a very significant difference.”

The Office of the Privacy Commissioner of Canada (OPC) has recently taken on several high-profile investigations, targeting TikTok, Aylo – the company behind Pornhub and YouPorn – and, in an ongoing case, X Corp., which operates the social media platform X and the chatbot Grok.

Dufresne agreed that while some companies have made improvements, more tools would be welcomed.

“We stand really in stark contrast with colleagues from all over the world, and having this power would be important,” he said. “It’s not that those fines should be imposed frequently, but having the possibility of the fine is going to make sure that companies come to the table and that they’re prepared to protect privacy, right at the beginning.”





The OPC is also in the midst of developing a Children’s Privacy Code that addresses the handling of children’s personal information to ensure it’s protected and that children are able to exercise their privacy rights.

“I’m using the powers that I have, and that means developing a code that’s going to set out my expectations for platforms and that’s going to include a reflection on age assurance and age verification,” Dufresne said. “How do we make sure that we prevent access to certain websites for kids?”

Experts said one ongoing challenge that continues to arise when it comes to age verification is the amount of information that would need to be collected by the platform to prove a child’s age.

“When you start to build these safeguards in, you’re talking in part about personal data and collecting more personal data and of course, now we can use biometric data, which is much more sensitive,” Scassa said.


“You’ve also got the questions about how our data is being used to control our access to technologies, and in controlling our access to technologies, how it can also be used to monitor our use of those technologies.”

Alberta’s privacy commissioner said it was also a key issue during the joint investigation into TikTok, which included the OPC along with provincial counterparts in Alberta, B.C. and Quebec.

“There were hundreds of thousands of Canadian children on the site that were under the age of 13,” McLeod said. “One of the recommendations we made is [TikTok] had to implement proper age verification, but then the question is: how much information do you need to collect to verify age?

“What’s the appropriate mechanism within these social media platforms?” McLeod questioned. “You know, this is what keeps me up at night.”

Many of these issues are expected to be front and centre when Evan Solomon, the federal minister of Artificial Intelligence and Digital Innovation, makes stops in Alberta next week, including his first official visit to Calgary on Wednesday.

None of the allegations in the lawsuit against OpenAI have been proven in court.


With files from Amy Judd, Sean Boynton and Catherine Urquhart, Global News


OpenAI agrees to strengthen safeguards following B.C. mass shooting: minister – BC | Globalnews.ca


Federal Artificial Intelligence Minister Evan Solomon says the CEO of OpenAI has agreed to take several actions to bolster safety, including providing a report outlining the new systems the firm is developing to identify high-risk offenders and policy violators.


A statement from Solomon following his meeting Wednesday with Sam Altman says the minister will also ask the Canadian AI Safety Institute to examine the company’s model and provide expert technical advice to his office.

The meeting follows the revelation that OpenAI banned the mass shooter in Tumbler Ridge, B.C., from using its ChatGPT chatbot last June due to worrisome interactions but did not alert law enforcement before the killings last month.




OpenAI has said new protocols would have resulted in Jesse Van Rootselaar’s interactions being flagged to police, but Solomon says the tragedy “demands answers and stronger safeguards when powerful AI technologies are involved.”


Solomon says the actions Altman has agreed to take include establishing a direct point of contact with RCMP and implementing safety protocols that direct people “experiencing distress” to appropriate local services.

For news impacting Canada and around the world, sign up for breaking news alerts delivered directly to you when they happen.

Get breaking National news

For news impacting Canada and around the world, sign up for breaking news alerts delivered directly to you when they happen.

The minister says Altman also confirmed the company would apply its new safety standards retroactively and review previously flagged cases.




“This will determine whether additional incidents that would have been referred to law enforcement under OpenAI’s new safety standards were missed, and ensure they are promptly reported to the RCMP,” Solomon’s statement says.

It says the company has also committed to assessing how it would incorporate Canadian privacy, mental health and law enforcement experts into the process of identifying and reviewing high-risk cases involving Canadian users of OpenAI technology.

Van Rootselaar fatally shot eight people in Tumbler Ridge on Feb. 10, including six children, before killing herself.


B.C. Attorney General Niki Sharma said Premier David Eby would meet Altman to find out whether the company could have prevented the shootings.




Sharma said there is a larger question for Ottawa when it comes to regulating and overseeing platforms like OpenAI.

The Altman meetings come after B.C.’s chief coroner, Dr. Jatinder Baidwan, on Tuesday announced an inquest into the shootings that will consider the role of artificial intelligence.

Sharma said she hopes OpenAI will participate in the inquest and share whatever it knows.

This report by The Canadian Press was first published March 4, 2026.


© 2026 The Canadian Press


OpenAI reps summoned to Ottawa to discuss concerns following Tumbler Ridge shooting | Globalnews.ca


Artificial Intelligence Minister Evan Solomon says he summoned representatives from OpenAI to Ottawa to discuss safety concerns following revelations about interactions the Tumbler Ridge, B.C., shooter had with ChatGPT.


OpenAI said the account was suspended due to concerns about the suspect’s posts, but that it did not alert law enforcement officials in Canada because the activity was not deemed an immediate threat.

“The horrifying tragedy in Tumbler Ridge has left families with unthinkable losses and shaken communities across Canada,” Solomon said in a statement on Saturday.

“Like many Canadians, I am deeply disturbed by reports that concerning online activity from the suspect was not reported to law enforcement in a timely manner.”




Solomon said Canadians expect online platforms, including OpenAI, to have “robust safety protocols and escalation practices” to help protect public safety.


On Friday, OpenAI confirmed that an account connected with the Tumbler Ridge shooter, Jesse Van Rootselaar, was flagged in June 2025 through its “abuse detection and enforcement efforts.”


Van Rootselaar shot and killed eight people on Feb. 10: her mother and half-brother at her home, and then five students and an educator at Tumbler Ridge Secondary School. Van Rootselaar was then found dead of what appeared to be a self-inflicted gunshot wound inside the school, RCMP later confirmed.

“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” a spokesperson for OpenAI, which owns ChatGPT, confirmed on Friday afternoon, adding that after the incident on Feb. 10, the company contacted the RCMP.


“We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”

Solomon said on Monday he is deeply disturbed by the reports of what happened with ChatGPT and Van Rootselaar’s account, and that he contacted the company over the weekend to get more information and to arrange a meeting in Ottawa on Tuesday.

He says he expects the company’s top safety representatives to explain its protocols and how it decides to forward cases to law enforcement.

— With files from Global News’ Prisha Dev and The Canadian Press

© 2026 Global News, a division of Corus Entertainment Inc.