One in five boys know someone their age who is in a relationship with an AI chatbot, according to a new survey.
Male Allies UK caught up with over 1,000 boys aged 12-16 years old to dive into their behaviour and attitudes when it comes to engaging with AI chatbots.
The vast majority of boys (85%) have had a conversation with a chatbot, with 43% saying they talk to bots so they can ask questions without feeling embarrassed.
Over a quarter (26%) said they prefer the attention and connection of a chatbot to real-life connections.
Robot romance is also on the rise, with over half of boys (58%) saying that AI relationships are easier because you can control the conversation.
Over one third (36%) of boys admitted they prefer speaking to AI chatbots over family and friends.
Lee Chambers, founder of Male Allies UK, said: “As parents we didn’t grow up with chatbots, and so we’re left in the dark on whether they are harmless or dangerous.
“What we do know is that spending time online can feel sociable but can actually be incredibly isolating. The main problem with developing a relationship with an AI chatbot is that it means that you are spending that time speaking to technology instead of building real-life connections.”
Concerns over AI chatbot relationships
Chambers noted that chatbots are, by default, submissive, and reassure and reaffirm people’s thoughts because “they want you to like them”.
“On top of this you can create your perfect ‘person’, moulding not only how they look but how they respond to you, how they treat you, and you can start and stop the relationship on a whim. This isn’t real life – and these instant gratification behaviours seeping into real life will have consequences.”
AI bots aren’t just being used as companions, either. Chambers noted they are enabling behaviour in boys that can cause irreparable damage with the rise of nudification apps.
Almost one in 10 boys (9%) aged 12-16 have used AI to create sexual images of their friends, with 5% admitting to using AI to create sexual images of family members, according to Male Allies research.
Just under half (47%) of boys in this age bracket know of sexual AI images/videos being created whilst at school.
Why boys say they are spending more time online
New data from the Boys In Schools report from Male Allies explored reasons as to why boys might be spending more time online – and turning to AI chatbots for company.
Most boys (81%) say they don’t think there are enough physical spaces for them.
Chambers suggested boys need “real-life connection and conversation” and “to know that they are supported and that they can speak up about what they are doing online without being judged”.
“We can’t just remove every new trend online, instead we need to bridge the gap between boys who are growing up with social media and AI and parents who are worried about the unknown,” he said.
The City of Calgary has reached a major milestone in the work to repair the beleaguered Bearspaw South Feeder Main as crews were scheduled to begin the process of slowly refilling the pipe with water on Friday — a task that will take several days to complete.
The water will then need to be tested to confirm it is safe for consumption, then the feeder main will be reconnected to the rest of Calgary’s water system.
The excavations along the nine sections of pipe where the repairs are being done have now been backfilled and the roads that had to be torn up to allow the work to proceed will soon be repaved.
This image, from the City of Calgary, shows some of the repair work being done to repair nine damaged sections of the Bearspaw South Feeder Main.
Source: City of Calgary
However, the city does not yet have a date for when the water restrictions will be lifted.
“We’re getting close, but we’re not out of the woods yet,” said the city’s general manager of infrastructure services, Michael Thompson. “Over the next few days, we will be moving ahead in a measured, deliberate way, with a focus on stability and safety as we work to start flowing water through the pipe.”
Mayor Jeromy Farkas told Global News in an interview on Friday: “We’re just a couple more days until we can end the water restrictions, but this allows us to reinforce those sections that we knew were on the imminent, imminently going to fail.”
The latest information on water use from the city shows that on Thursday, Calgarians used 483 million litres of water — that’s below the 500 million litres of daily water use that the city claims is sustainable while the feeder main is shut down and the Glenmore Reservoir is being used to supply most of the city’s water.
On Wednesday, Calgarians used 501 million litres.
“We know next week is spring break for a lot of households,” Thompson said. “We ask everyone to continue with your water saving, especially as your household routines might change next week.”
Thompson said it will also take about five million litres of water to refill the recently repaired sections of the pipe, so overall water use is expected to increase over the weekend.
While Calgarians were able to keep their daily water use on Thursday below the 500 million litres the city claims is sustainable, water consumption is expected to increase over the next few days while the Bearspaw South Feeder Main is being refilled.
X/JeromyYYC
Even with the repairs that are being done, Farkas continues to warn that the pipe is terminally ill and could still break at any time, which would result in another shutdown and more water restrictions.
While the city said mitigation work has been done to protect homes and businesses, the city has also issued a warning about the possibility of pooling water in the communities of Bowness and Montgomery should there be another failure.
“There are a couple areas through those communities where if the pipe were to fail, it would cause flooding. Think like the videos that folks saw on December 30th when Trans-Canada Highway became that surging river. So we don’t want to be in that situation. So we’ve done some preventative work in the area. You’ll see adjustments to the pathway, the berms, but we want to do this as safely as possible,” Farkas said.
The city has also produced maps of the area showing where water could pool if there is another failure.
The city will also be hosting an online information session on Monday at noon to provide an update on the feeder main repairs for people who live in the area.
Farkas claims the city is also on track to complete the job of replacing the old concrete feeder main with a new steel pipe by sometime in December.
A lawyer involved in a B.C. lawsuit against social media giant Meta says a decision in the U.S. could affect the court case in this province.
On Wednesday, a jury declared that Meta and YouTube must pay millions in damages to a 20-year-old woman after they found that the companies designed their platforms to hook young people without concern for their well-being, according to the Associated Press.
The woman, identified as KGM, started using YouTube at six years old and Instagram at nine, and testified that her addiction led to anxiety, depression and self-harm.
The jury awarded her $6 million in damages.
“There are so many families who have been tragically hurt through the addiction of social media,” Mark Lanier, KGM’s lawyer, said.
It is the second blow to Meta this week after a jury in New Mexico ordered the social media giant to pay $375 million for violating child safety laws and hiding what it knew about the dangers of sexual exploitation on its apps.
In both cases, Meta and Google are exploring legal options, including appeals.
Lawyers in B.C. are also suing Meta in a proposed class action civil suit.
“The decision in New Mexico relates to conduct we say is illegal in Canada,” Reidar Mogerman, class action co-lead counsel, said.
“The finding that it is a broad, system-wide impact that’s causing harm to children is exactly the kind of thing we’re going to be litigating in Canada, and we see it as a really beneficial step and guide for us as we move our case forward.”
The Canadian lawsuit against Meta could include thousands of children across the country, but at this point, the precise number is unknown because the case has yet to be certified as a class action.
Meta denies the allegations, none of which have been proven in court. A hearing to determine whether a class action will proceed is expected next year.
When ChatGPT came on the scene in 2022, Silicon Valley types immediately compared AI to the dawn of the internet in the 1990s.
OpenAI received the same fanfare when it unveiled Sora two years later. By typing a sentence or two into a box on a phone screen, a user could generate a short video that looked straight out of Hollywood.
Disney even signed a three-year, $1 billion deal to allow Sora users to forge clips using characters like Mickey Mouse, Cinderella or Yoda.
Yet OpenAI abruptly announced yesterday that it is pulling the plug on its Sora consumer app and internet service. No reason was given.
‘To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing,’ OpenAI said in a post on X.
OpenAI confirmed to Metro that it would continue to use video-generation technologies to teach skills to robots.
‘As we focus and compute demand grows, the Sora research team continues to focus on world simulation research to advance robotics that will help people solve real-world, physical tasks,’ a spokesperson added.
Disney told Metro it ‘respects OpenAI’s decision to exit the video generation business’ and is keen to license its property to an AI company.
AI enthusiasts and critics alike were taken aback by the overnight end of Sora. Only the day before, OpenAI published a blog post about how to safely create content with its ‘state-of-the-art video generation’ app.
Some, however, weren’t exactly surprised. After all, Sora got into hot water last year when people created videos with copyrighted material.
A spokesperson for Disney said: ‘We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators.’
But almost all began to wonder the same thing – is Sora’s end a sign that the AI bubble is about to burst, as the Bank of England feared last year?
Metro spoke with nearly a dozen financial analysts, AI experts and stock researchers about whether this will happen.
There were mixed feelings.
Is the AI bubble about to burst?
AI is Wall Street and the City of London’s hottest trade – but for how long is anyone’s guess (Picture: Metro)
‘Every bubble starts with a story people want to believe,’ says Dat Ngo, of the trading guide, Vetted Prop Firms.
‘In the late 90s, it was the internet. Today, it’s artificial intelligence.
‘The parallels are hard to ignore: skyrocketing stock prices, endless hype and companies investing billions before fully proving their business models.’
In 2000, dot-com whizzes were minting easy millions from internet start-ups. When interest rates were hiked, investors sold off their holdings, companies went bust and people lost their jobs.
Some stock researchers worry that the AI boom could lose steam when the companies spending billions on the tech see profits dip.
Tech giants are spending serious money on the data centres that power AI this year: Amazon is spending $200 billion; Google, $185 billion; Microsoft, $114 billion; and Meta, $135 billion. (As a video-generation service, Sora required far more computing power than most consumer AI products.)
Sora sparked fears that AI could take jobs in Hollywood, like actors and animators (Picture: Sora)
Yet Dr Alessia Paccagnini, an associate professor at University College Dublin’s Michael Smurfit Graduate Business School, notes that consumers are spending only around $12 billion on AI products. Set against those infrastructure budgets, that’s a big difference.
AI-focused stocks are mainly in US markets but as so many investors across the world have bought into it, a fallout would be felt globally.
Dr Paccagnini adds: ‘As a worst-case scenario, if the bubble does burst, the immediate consequences would be severe – a sharp market correction could wipe trillions from stock valuations, hitting retirement accounts and pension funds hard.’
‘In my opinion, we should be worried, but being prepared could help us avoid the worst outcomes.’
‘AI hype is overly optimistic’
Despite scepticism, AI feels like it’s everywhere these days, from dog bowls and fridges to toothbrushes and bird feeders.
And it might continue that way for a while, even if not as enthusiastically as before, says Professor Filip Bialy, who specialises in computer science and AI ethics at the Open Institute of Technology.
‘AI hype – an overly optimistic view of the technological and economic potential of the current paradigm of AI – contributes to the growth of the bubble,’ he says.
‘However, the hype may end not with the burst of the bubble but rather with a more mature understanding of the technology.’
Sora added video to OpenAI’s image generation tools on ChatGPT (Picture: Boivin/NurPhoto/Shutterstock)
Leeron Hoory, a finance journalist at BusinessHeroes, adds that calls for caution are, much like AI technology itself, premature.
She says that the tech industry has a history of spending big to deliver change, as it did with the computer revolution – and that took five years before any sort of reckoning came.
‘But AI isn’t a passing trend like the dot-com rush,’ Hoory says, ‘it’s an infrastructural shift that will underpin everything from logistics to medicine to governance.
‘The market isn’t overheated – it’s still catching up to the scale of what’s coming.’
A petition opposing the construction of Bell’s AI Fabric Data Centre has been circulating over the past month, steadily gaining traction.
The petition was created by a concerned 14-year-old Regina resident, Aya Merroche, and as of Monday had collected nearly 11,000 signatures.
Many of the signatories left comments detailing concerns about noise, water use, environmental impacts, power consumption and the ethics of artificial intelligence.
Global News spoke with Dr. Simon Enoch, who has been studying the growing community backlash against data centres across North America, to find out what residents should be worried about.
This series of images from NASA’s Hubble Space Telescope of the fragmenting comet C/2025 K1 (ATLAS) was taken over three consecutive days – November 8, 9 and 10 last year (Picture: NASA/Cover Media)
NASA astronomers struck it lucky after the Hubble Space Telescope, entirely by chance, observed a comet in the act of disintegrating.
The event was one that scientists believed they were unlikely to witness in real time.
And it was even more extraordinary as researchers had intended to observe a different comet, but were forced to change plans due to technical constraints.
The findings were published on Wednesday in the journal Icarus.
‘Sometimes the best science happens by accident,’ John Noonan, a research professor in the Department of Physics at Auburn University in Alabama, said.
‘This comet got observed because our original comet was not viewable due to some new technical constraints after we won our proposal. We had to find a new target – and right when we observed it, it happened to break apart, which is the slimmest of slim chances.’
The object, known as Comet C/2025 K1 (ATLAS), can be seen progressively breaking apart in a sequence of images taken between November 8 and 10 last year.
Initially appearing as four bright objects, the largest fragment then splits further, with pieces drifting away from one another.
This diagram shows the path the comet took as it swung past the Sun and began its journey out of the solar system (Picture: NASA/Cover Media)
Noonan, a co-investigator on the study, said he did not realise the significance immediately.
‘While I was taking an initial look at the data, I saw that there were four comets in those images when we only proposed to look at one,’ he said. ‘So we knew this was something really, really special.’
Scientists have long attempted to capture such an event using Hubble, but the unpredictability of comet break-ups has made this difficult.
‘The irony is now we’re just studying a regular comet and it crumbles in front of our eyes,’ said principal investigator Dennis Bodewits, also of Auburn University.
‘Comets are leftovers of the era of solar system formation, so they’re made of “old stuff”, the primordial materials that made our solar system.
‘But they are not pristine – they’ve been heated; they’ve been irradiated by the Sun and by cosmic rays.
‘So, when looking at a comet’s composition, the question we always have is, “Is this a primitive property or is this due to evolution?”
‘By cracking open a comet, you can see the ancient material that has not been processed.’
Hubble observed the comet splitting into at least four pieces, each surrounded by a glowing cloud of gas and dust known as a coma. While ground-based telescopes saw only faint bright patches, Hubble’s high resolution allowed scientists to distinguish individual fragments clearly.
The observations were made shortly after the comet passed its closest point to the Sun – known as perihelion – when heating and stress are at their greatest. Scientists believe the comet began breaking up about eight days before Hubble captured the images.
However, the team has identified a puzzling delay between the break-up and the brightening detected from Earth.
A series of images from NASA’s Hubble Space Telescope of the fragmenting comet (Picture: NASA/Cover Media)
One theory is that a layer of dust must first form over newly exposed ice before being blown away. Another possibility is that heat builds up beneath the surface before ejecting material into space.
‘Never before has Hubble caught a fragmenting comet this close to when it actually fell apart. Most of the time, it’s a few weeks to a month later. And in this case, we were able to see it just days after,’ said Noonan.
‘This is telling us something very important about the physics of what’s happening at the comet’s surface. We may be seeing the timescale it takes to form a substantial dust layer that can then be ejected by the gas.’
Early observations suggest the comet is chemically unusual, with significantly lower levels of carbon than typically seen. Further analysis using Hubble’s instruments is expected to reveal more about its composition and, potentially, the origins of the solar system.
Now reduced to a cluster of fragments about 250 million miles from Earth, the comet is travelling through the constellation Pisces and is expected to leave the solar system permanently.
It was a packed house at Platform Calgary on Wednesday as Canada’s minister for AI and digital innovation, Evan Solomon, explained more on how the government is looking to support the growing AI industry.
“We are on a mission for ‘team yes’ to find answers,” explained Solomon. “My job is to facilitate ‘team yes.’ To get out of the way when we need to, and to give a boost when we have to.”
According to Solomon, Canada is at a critical juncture, living through a period of political and technological change happening at an exponential rate.
Minister Evan Solomon says his government is here to support Canada’s AI industry.
Global News
“This political and this technological realignment poses real challenges to our sovereignty, to our values, to our communities… but it also presents opportunities,” Solomon noted.
Minister Solomon noted that there seem to be two distinct sides of the AI coin: those with pompoms, who believe AI will solve all the world’s problems, and those with pitchforks, who say AI will take away jobs, harm the environment and endanger our future.
“We’ve got to be open to the opportunities here and not stifle the innovation, and make sure that we’re candid about their concerns,” said Solomon. “Privacy, data, jobs, and we will protect those things as well.”
Those attending Wednesday’s event listened intently to discussion of potential regulations for AI.
Global News
Alberta-based AI firms are responding positively to the idea that the feds are willing to fight to keep Canadian companies in the country.
“That spirit of collaboration and ecosystem growth, built on an actual federal level? I think that’s absolutely key,” affirmed Ferdinand Hingerl, chief technology officer with Ambyint.
“With such a strong neighbour in the south that we always have to deal with (brain drain), the question is how can we address that challenge so that all the money we invest in our people here stays in Canada.”
There are three key pillars to the federal government’s plan: ensuring access to capital, computing, and consumers.
“Most companies would rather have a contract than a grant, and the federal government can play a big role in that,” said Shannon Vander Meulen, co-founder of WaitWell. “There’s a bit of a double-edged sword with that because obviously a lot of companies like mine sell extensively into the U.S.”
Currently, Canada has only a voluntary code of conduct for the development and management of advanced generative AI systems. Solomon tells Global News that he and other ministers are working on introducing new legislation to provide a more concrete framework to protect Canadians and their data.
“The justice minister has tabled legislation on the non-consensual sharing of sexual and synthetic deep-fake imagery, to criminalize that,” Solomon shared.
“I will be tabling legislation to update our privacy, to protect our consumers, to protect our children, and make sure our children’s information is safe… And then Marc Miller is going to have the online harms element.”
Mount Royal University information design associate professor Lauren Dwyer says regulating AI in Canada is critical for protecting Canadians.
Global News
At Mount Royal University, information design associate professor Lauren Dwyer says sorting out a mandatory framework to protect Canadians is hugely important.
“We are driving next to a cliff with potential huge consequences if we aren’t managing this properly,” noted Dwyer. “And we’ve seen some of the most deadly versions of this when we look at what happened in Tumbler Ridge.”
Dwyer’s research focuses on a number of different areas within the sphere of artificial intelligence, including how the design of AI shapes communication, our behaviour, and what people do about it.
To her, if we want to remove the human element, there needs to be a greater focus on accuracy.
“When we’re using this tool to make things more efficient, we’re also removing the possibility of a person at every single step,” Dwyer said. “We love to talk about artificial intelligence with this ‘human in the loop,’ someone supervising the decisions being made, and that’s fantastic if efficiency isn’t the goal. If you’re supervising all these decisions but you’re being urged to move faster, chances are you’re only taking a quick glance.”
Dwyer notes that, traditionally, in order for a new technology to be adopted, there has to be a foundation.
“A study coming out of Toronto Metropolitan University’s social media lab showed the majority of Canadians that they surveyed were using AI, specifically gen-artificial intelligence like ChatGPT,” explained Dwyer. “And yet the majority of Canadians (who were surveyed) said they didn’t trust the information that was coming out of it. So we’re seeing those models start to break.”
But Dwyer is optimistic that Canada is following the lead of other jurisdictions as it develops its own regulations.
“The European Union is doing a phenomenal job with regulation and it’s doing a much stronger job than let’s say the U.S. is doing with regulation,” Dwyer said.
“That doesn’t mean that they have it perfectly figured out, and that the work the EU is doing isn’t without its flaws. Canada is right to be taking its own path on this and figuring out how to strike the best balance.”
Culture Minister Marc Miller says the government must have a serious conversation about artificial intelligence (AI) systems’ use of news.
“Having the news cannibalized and regurgitated undermines the spirit of the use of that news in the first place and the purpose for which it’s used and we have to have a serious conversation with the platforms that purport to use it including AI shops,” Miller said.
Miller was asked whether the government is open to extending its Online News Act to AI companies. The Online News Act requires Meta and Google to compensate media outlets for displaying their content. Meta pulled news off its platforms in response, but Google has been making payments under the act.
He said it’s not a question about opening up the legislation but of making sure companies are acting responsibly.
Miller was speaking at a national summit of AI and culture, a day after a new report said AI systems depend on Canadian journalism for the information they provide users but don’t offer compensation or proper attribution in return.
Researchers at McGill University’s Centre for Media, Technology and Democracy tested 2,267 Canadian news stories on ChatGPT, Gemini, Claude and Grok.
They found when the platforms were asked about Canadian news events from their training data, they did not provide source attribution about 82 per cent of the time.
The report said AI companies now extract value from journalism “at every stage: ingesting news archives as training data, producing derivative content without naming the sources, and delivering answers to consumers that could reduce the need and incentive to visit the original source.”
The system “accelerates the economic decline of the journalism it relies on,” the researchers said.
Miller said Tuesday he had seen the report. He said he wants the government’s legislation to work, and that “this is about people paying their fair share.”
Asked whether that principle extends to AI companies, Miller said “the principle of proper compensation for use of proprietary material doesn’t change.”
Miller reiterated that the government is open to a deal to bring news back to Meta’s platforms.
The McGill researchers said in a policy brief the problems posed for journalism by social media and AI systems are distinct.
While social media platforms “captured advertising revenue by aggregating attention around news content,” the brief reads, “AI companies are doing something different: they are absorbing the substance of journalism, and delivering it directly to consumers as their own product.”
That means the “consumer’s need to visit the source is not just reduced by algorithmic demotion, as it was with social media. It is rendered unnecessary by the AI’s response itself.”
A coalition of Canadian news outlets, which includes The Canadian Press, Torstar, the Globe and Mail, Postmedia and CBC/Radio-Canada, are suing OpenAI in an Ontario court. They argue OpenAI is using their news content to train ChatGPT, breaching copyright and profiting from the use of that content without permission or compensation.
When he was asked Tuesday about the government’s position on whether the use of copyrighted materials for AI training violates copyright law, Miller said he doesn’t believe there is a need to open up the law.
“Intellectual property reform is a complex issue that goes over and above artificial intelligence, and it is a multi-year process. So it’d be irresponsible in any context to stand here and say nothing’s going to happen,” he said.
“But the current copyright law does and should protect those that have created material and people need to be compensated properly.”
In a 2024 consultation on copyright and artificial intelligence, AI companies maintained that using the material to train their systems doesn’t violate copyright.
The news publishers’ lawsuit was launched in late 2024. It’s unclear how long it will take for the court to make a decision on the case.
The House of Commons heritage committee heard last year from groups and unions representing creative industries that take issue with AI’s use of copyright-protected works without permission and want to establish a licensing system covering such use.
AI-powered toys that “talk” with young children should be more tightly regulated, suggests a report from the University of Cambridge.
Researchers at the university explored how generative AI toys capable of human-like conversation may influence development in the years up to age five.
The year-long project included scientific observations of children interacting with a GenAI toy for the first time.
While the report highlighted benefits of these toys, including their potential to support language and communication skills, the researchers also found the toys tended to struggle with social and pretend play, misunderstand children, and react inappropriately to emotions.
When one five-year-old told the toy, “I love you,” for example, it replied: “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.”
Despite GenAI toys being widely marketed as learning companions or friends, their impact on early years development has barely been studied.
As a result, researchers are urging parents and educators to proceed with caution.
Discussing one potential red flag, study co-author Dr Emily Goodacre said: “Generative AI toys often affirm their friendship with children who are just starting to learn what friendship means. They may start talking to the toy about feelings and needs, perhaps instead of sharing them with a grown-up.
“Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy – and without emotional support from an adult, either.”
What did the study involve?
The study was kept deliberately small-scale to enable detailed observations of children’s play and capture nuances that larger-scale studies might miss.
Researchers surveyed early years educators to explore their attitudes and concerns, then ran more detailed focus groups and workshops with early years practitioners and 19 children’s charity leaders.
Working with Babyzone, an early years charity, they video-recorded 14 children at London children’s centres playing with a GenAI soft toy called Gabbo.
Designed for kids over three, Gabbo is a plush robot that can have “endless conversations” with children and provides “educational playtime”, according to Curio, which creates the $99 (£73) toy.
After the play sessions, they interviewed each child and a parent, using a drawing activity to support the conversation.
The pros and cons of AI toys
Most parents and educators felt that AI toys could help develop children’s communication skills, and some were enthusiastic about their learning potential.
But equally, many worried about children forming “parasocial” relationships with toys. The observations supported this: children hugged and kissed the toy, said they loved it and (in the case of one child) suggested they could play hide-and-seek together.
Dr Goodacre stressed that these reactions might simply reflect children’s vivid imaginations, but added there was potential for unhealthy relationships to form.
Children in the study also struggled with the toy’s conversation, as it sometimes ignored their interruptions, mistook parents’ voices for children’s, and failed to respond to apparently important statements about feelings.
When one three-year-old told the toy: “I’m sad,” it misheard and replied: “Don’t worry! I’m a happy little bot. Let’s keep the fun going. What shall we talk about next?”
Parents were also worried about privacy – specifically what information the toy might be recording and where this would be stored. When selecting an AI-powered toy for the study, researchers said many GenAI toys’ privacy practices are unclear or lack important details.
On the Gabbo website, Curio said its toys are “built from the ground up with privacy and security at the forefront”. The company added that its operating system “merges all-ages fun with G-rated content, anonymity, and privacy, and security for every safeguarded adventure”. It’s also KidSAFE listed.
Nearly 50% of early years practitioners surveyed said they did not know where to find reliable AI safety information for young children, and 69% said the sector needed more guidance.
They also raised concerns about safeguarding and affordability, with some fearing AI toys could widen the digital divide.
Experts have also previously warned that AI can make mistakes, passing on incorrect information, as well as bias, to kids.
Researchers say strict regulation is needed
AI-powered toys are set to boom in the coming years. In June 2025, one of the world’s leading toy companies, Mattel, announced a strategic collaboration with OpenAI (the company behind ChatGPT) with a view to creating “AI-powered products and experiences”.
Researchers now want to see clearer regulation addressing key concerns. They recommend limiting how far toys encourage children to befriend or confide in them, more transparent privacy policies, and tighter controls over third-party access to AI models.
“A recurring theme during focus groups was that people do not trust tech companies to do the right thing,” said Professor Jenny Gibson, the study’s other co-author. “Clear, robust, regulated standards would significantly improve consumer confidence.”
The report urges manufacturers to test toys with children and consult safeguarding specialists before releasing new products.
Parents are also encouraged to research GenAI toys before buying and to play with their children, creating opportunities to discuss what the toy is saying and how the child feels.
And lastly, the authors recommend keeping AI toys in shared family spaces where parents can monitor interactions.
TikTok will be allowed to maintain its business operations in Canada under new rules, including “enhanced protection” of Canadian users’ data, after the completion of a new national security review that reverses the conclusion of a previous one.
A statement from Industry Minister Melanie Joly on Monday said the popular social media platform has agreed to “new security gateways and privacy-enhancing technologies to control access to Canadian user data.”
TikTok will also implement enhanced protections for minors in line with the steps agreed to in the federal privacy commissioner’s joint investigation into the handling of young users’ data and age limits.
An independent third-party monitor will be appointed to regularly audit and verify TikTok’s data access controls and provide reports to the federal government.
“The government of Canada will exercise its full authorities under the Investment Canada Act and ensure the full implementation and enforcement of the measures committed to by TikTok Canada,” Joly said in a statement.
“Further, this decision will protect Canadian jobs, ensuring that TikTok Canada maintains a physical presence in Canada, with commitments to invest in its cultural sector.”
TikTok’s statement on the agreement focused on that future investment and support for Canadian creators and users, which the company said numbers over 16 million monthly visitors to the platform.
It said maintaining its local business operations will help support Canadian creators and organizations that use TikTok.
TikTok added its enhanced security measures will form “a highly secure barrier around Canadian user data.”
Ottawa ordered the wind-down of TikTok’s Canadian business operations in 2024 after an initial national security review, but said it would still allow Canadians to use the app.
Privacy and safety concerns have been raised about TikTok and its China-based parent company, ByteDance, because of Chinese national security laws that compel organizations in the country to assist with intelligence gathering.
TikTok challenged the order in federal court, which overturned the shutdown in January of this year.
The federal government asked for a court ruling to set aside the 2024 decision in a letter that said Ottawa and TikTok had agreed to seek the court order and launch another national security review.
The agreement to set aside the shutdown order came shortly after Prime Minister Mark Carney visited China and secured a deal to get China to lower agricultural tariffs in exchange for opening some market access for Chinese electric vehicles.
Joly’s statement on Monday said the new decision “follows a thorough assessment of the information and evidence gathered during the review process, including advice from Canada’s security and intelligence community and other government partners.”
The statement echoed language used in the 2024 announcement justifying the shutdown order, though Ottawa has never fully detailed why it was deemed necessary for TikTok’s Canadian office to close.
“Protecting Canadians’ data and the safety of children online will always be a top priority of the government,” Joly added.
The statement noted Canada’s approach to TikTok was in line with the European Union, which has insisted on similar data privacy measures for its users and independent oversight.
In the U.S., TikTok’s assets have been spun off into a new legal entity that is majority-owned by American tech firms, including Oracle, which will oversee American users’ data and privacy. ByteDance remains a minority stakeholder in the U.S. venture.
TikTok told Global News on background that it never actually closed its office in Toronto during the legal challenge over the shutdown order.
Monday’s announcement, it added, enables TikTok to resume its local investments and work that was paused by Ottawa’s initial decision.
Joly’s announcement comes after the Office of the Privacy Commissioner of Canada told Global News in January it was looking into whether TikTok’s recent privacy policy updates would impact Canadian users.
In a privacy policy update issued on Jan. 22, TikTok detailed the information it collects from users of the app, including new plans to collect information about what appears to be more detailed location tracking.
Many users and U.S. media outlets have also noted the update states TikTok is collecting information where applicable under local laws surrounding “sexual life or sexual orientation, status as transgender or nonbinary, citizenship or immigration status.”
— With files from Global’s Adriana Fallico and The Canadian Press