Countries around the world are considering teen social media bans – why experts warn it’s a ‘lazy’ fix


Gen Z girl looking at smartphone screen feeling upset scrolling on social media.

Mementojpeg | Moment | Getty Images

Governments around the world are making efforts to crack down on teen social media use amid mounting evidence of potential harms, but critics argue blanket bans are an ineffective quick fix.

Australia became the first country to enforce a sweeping social media ban for under-16s in December, requiring platforms like Meta’s Instagram, ByteDance’s TikTok, Alphabet’s YouTube, Elon Musk’s X, and Reddit to implement age verification measures or face penalties.

Several European countries are now looking to follow Australia’s lead, with the U.K., Spain, France, and Austria drafting their own proposals. Although a national ban in the U.S. looks unlikely, state-level legislation is underway.

It comes after Meta, the parent company of Facebook, Instagram and Threads, suffered two separate courtroom defeats in March in trials over child safety and social media harms.

A Santa Fe jury found Meta misled users about child safety on its apps. The next day, a Los Angeles jury ruled that Meta and YouTube designed platform features that contributed to a plaintiff’s mental health harms.

Meta CEO and Chairman Mark Zuckerberg arrives at Los Angeles Superior Court ahead of the social media trial tasked to determine whether social media giants deliberately designed their platforms to be addictive to children, in Los Angeles, on Feb. 18, 2026.

These developments are set to “unleash a lot more legislation,” Sonia Livingstone, social psychology professor and director of the London School of Economics’ Digital Futures for Children center, told CNBC.

However, Livingstone said a social media ban for teens is a slapdash solution from governments that have failed to properly police tech giants for years.

“I think the argument for a ban is an admission of failure that we cannot regulate companies, so we can only restrict children,” she said, explaining that the U.S. and Europe already have a lot of legislation on the books that isn’t being enforced.

“When are governments really going to enforce, raise the stakes on fines, ban the companies if necessary for not complying?” she added.

Enforce existing laws

Experts argue the sector has for too long escaped accountability and the rigid requirements faced by other industries.

“[Governments] should be implementing the law [and] big tech companies should be facing a slew of regulatory interventions that forbid a whole series of practices that they currently do,” Livingstone said.

She highlighted the U.K.’s Online Safety Act, which “requires safety by design” — this means features such as Snapchat’s “Quick Add” that invite teens to befriend others should be stopped, according to Livingstone.

Livingstone believes that a blanket ban wouldn’t even be under discussion if social media companies had undergone appropriate premarket testing to establish if their features are safe for their target audience.

“There are lots of areas where we have a well functioning market that requires testing to establish it meets the standards…[before products] can go into the market,” she said. “If we did that for AI and for social media, we would be in a whole different place and we’d not be having to talk about banning children from anything.”

Josh Golin, executive director at Boston-based non-profit Fairplay for Kids, told CNBC that he’d like to see “privacy and safety by design legislation rather than blanket bans” across the U.S.

This includes passing the Children and Teen Online Privacy Protection Act to put a stop to data-driven advertising aimed at children, so there’s “less financial incentive for social media companies to target and addict kids.”

Golin added that passing the Senate’s version of the Kids Online Safety Act (KOSA) is also key to ensuring platforms are held legally responsible for design features that can cause addiction or other harms.

He added that Meta has already successfully lobbied to stop KOSA even though it passed the Senate in 2024. But if the company continues to block legislation, Golin thinks more pressure could “line up behind bans because addictive and unsafe is not OK.”

A ban is ‘lazy’ and ‘unfair’

A sweeping social media ban only punishes a generation of young people who have become increasingly dependent on online means of interaction, according to Livingstone. She said bans are a “lazy” solution from governments and an “unfair” outcome for young people.

“It’s the 15 years in which we don’t let our children go outside and meet their friends. It’s the 15 years in which we stopped funding parks and youth clubs for them to meet in,” she said.

“So a ban now is to say to children, ‘We can’t make the regulation work. We can’t update it fast enough. We haven’t built you anything else to do, but that’s just tough. We’ve terrified your parents into feeling that there’s nothing they can do, and we’re going to take you away from the service where you hoped you would feel some sociability and entertainment.’”



Italy investigates Sephora and Benefit over skincare marketing to children


A view of a Sephora beauty product store on May 30, 2025 in Sherman Oaks, California.

Justin Sullivan | Getty Images

Italian regulators are looking to clamp down on the tween skincare obsession and are investigating the LVMH-owned cosmetic brands Sephora and Benefit over an “insidious” marketing campaign to children.

The Italian Competition Authority (AGCM) said Friday that it has launched investigations into the two cosmetics brands centred on “unfair commercial practices” that encouraged children and young people, including those under the age of 10, to purchase serums, masks and anti-ageing creams.

The regulator said the marketing is fuelling behavior known as “cosmeticorexia,” which refers to an unhealthy fixation on skincare amongst minors.

It emphasized that, both in stores and on social media, Sephora and Benefit had failed to appropriately label products or had at times omitted important precautions on products not intended for use by minors, which could seriously harm children’s health.

Additionally, AGCM said the popular cosmetics brands employed an “insidious marketing strategy” in which young micro-influencers encouraged other young people to buy their products.

AGCM officials and the Italian financial police carried out inspections of the premises of Sephora Italia, LVMH Profumi e Cosmetici Italia, and LVMH Italia on Thursday.


LVMH said Sephora, Benefit, and LVMH P&C Italy had been notified of the investigation.

“As the investigation is ongoing, Sephora, Benefit and LVMH P&C Italy cannot share further comments at this stage, they express their willingness to fully cooperate with the authorities,” LVMH said in a statement to CNBC. “All the companies reaffirm their strict compliance with applicable Italian regulations.”

Sephora boasts nearly 23 million followers on Instagram and over 2 million followers on TikTok, with the beauty brand at the center of tween beauty trends.

The “Sephora kids” social media trend has gained traction over the past few years, with viral videos on TikTok and Instagram showing stores flooded with teenage girls loading up their baskets with brightly-coloured and fun-looking skincare products.

In some videos, young girls show off their skincare routines with products containing anti-ageing ingredients like retinol.

A CBS News analysis of 240 skincare posts from teen influencers on TikTok found that many of the videos hadn’t been properly tagged as promotional content, with only 15 videos, or just 6% of posts, doing so. This means many content creators may unintentionally be advertising products to unsuspecting children.

One teen skincare influencer, Embreigh Courtlyn, told CBS that some brands asked her not to label videos with “#ad,” which can be off-putting to viewers, and instead to describe the brands as partners, which helped the content perform better.

A peer-reviewed study by Northwestern University researchers, published in June last year, reviewed 100 popular skincare videos posted by influencers aged 7 to 18. It found that only a quarter of the videos included sunscreen, while the top 25 most-viewed videos contained an average of 11, and a maximum of 21, potentially irritating active ingredients.



AI chatbot firms face stricter regulation in online safety laws protecting children in the UK


Preteen girl at desk solving homework with AI chatbot.

Phynart Studio | E+ | Getty Images

The UK government is closing a “loophole” in its new online safety legislation, making AI chatbots subject to requirements to combat illegal material, with fines or even blocking for platforms that fail to comply.

After the country’s government staunchly criticized Elon Musk’s X over sexually explicit content created by its chatbot Grok, Prime Minister Keir Starmer announced new measures bringing chatbots such as OpenAI’s ChatGPT, Google’s Gemini and Microsoft’s Copilot within the scope of his government’s Online Safety Act.

The platforms will be expected to comply with “illegal content duties” or “face the consequences of breaking the law,” the announcement said.

This comes after the European Commission investigated Musk’s X in January for spreading sexually explicit images of children and other individuals. Starmer led calls for Musk to put a stop to it.

Keir Starmer, UK prime minister, during a news conference in London, UK, on Monday, Jan. 19, 2026.

Bloomberg | Bloomberg | Getty Images

Earlier, Ofcom, the UK’s media watchdog, opened an investigation into reports that X had spread sexually explicit images of children and other individuals.

“The action we took on Grok sent a clear message that no platform gets a free pass,” Starmer said, announcing the latest measures. “We are closing loopholes that put children at risk, and laying the groundwork for further action.”

Starmer gave a speech on Monday on the new powers, which extend to setting minimum age limits for social media platforms, restricting harmful features such as infinite scrolling, and limiting children’s use of AI chatbots and access to VPNs.

One measure announced would force social media companies to retain data after a child’s death, unless the online activity is clearly unrelated to the death.

“We are acting to protect children’s wellbeing and help parents to navigate the minefield of social media,” Starmer said.

Alex Brown, head of TMT at law firm Simmons & Simmons, said the announcement shows how the government is taking a different approach to regulating rapidly developing technology.

“Historically, our lawmakers have been reluctant to regulate the technology and have rather sought to regulate its use cases and for good reason,” Brown said in a statement to CNBC.

He said that regulations focused on specific technology can age quickly and risk missing aspects of its use. Generative AI is exposing the limits of the Online Safety Act, which focuses on “regulating services rather than technology,” Brown said.

Starmer’s latest announcement, he added, showed the UK government wanted to address the dangers “that arise from the design and behaviour of technologies themselves, not just from user‑generated content or platform features.”

There’s been heightened scrutiny around children and teenagers’ access to social media in recent months, with lawmakers citing mental health and wellbeing harms. In December, Australia became the first country to implement a law banning teens under 16 from social media.

Australia’s ban forced apps like Alphabet’s YouTube, Meta’s Instagram, and ByteDance’s TikTok to have age-verification methods such as uploading IDs or bank details to prevent under-16s from making accounts.

Spain became the first European country to enforce a ban earlier this month, with France, Greece, Italy, Denmark, and Finland also considering similar proposals.

The UK government launched a consultation in January on banning social media for under-16s.

Additionally, the country’s House of Lords, an unelected upper legislative chamber, voted last month to amend the Children’s Wellbeing and Schools Bill to include a social media ban for under-16s.

The next phase will see the bill reviewed by the House of Commons. Both houses must agree on any changes before the bill passes into law.