Countries around the world are considering teen social media bans – why experts warn it’s a ‘lazy’ fix


Gen Z girl looking at smartphone screen feeling upset scrolling on social media.

Mementojpeg | Moment | Getty Images

Governments around the world are making efforts to crack down on teen social media use amid mounting evidence of potential harms, but critics argue blanket bans are an ineffective quick fix.

Australia became the first country to enforce a sweeping social media ban for under-16s in December, requiring platforms like Meta’s Instagram, ByteDance’s TikTok, Alphabet’s YouTube, Elon Musk’s X, and Reddit to implement age verification measures or face penalties.

Several European countries are now looking to follow Australia’s lead, with the U.K., Spain, France, and Austria drafting their own proposals. Although a national ban in the U.S. looks unlikely, state-level legislation is underway.


It comes after Meta, the parent company of Facebook, Instagram and Threads, faced two separate defeats in trials related to child safety and social media harms in March.

A Santa Fe jury found Meta misled users about child safety on its apps. The next day, a Los Angeles jury ruled that Meta and YouTube designed platform features that contributed to a plaintiff’s mental health harms.

Meta CEO and Chairman Mark Zuckerberg arrives at Los Angeles Superior Court ahead of the social media trial tasked to determine whether social media giants deliberately designed their platforms to be addictive to children, in Los Angeles, on Feb. 18, 2026.


These developments are set to “unleash a lot more legislation,” Sonia Livingstone, social psychology professor and director of the London School of Economics’ Digital Futures for Children center, told CNBC.

However, Livingstone said a social media ban for teens is a slapdash solution from governments that have failed to properly police tech giants for years.

“I think the argument for a ban is an admission of failure that we cannot regulate companies, so we can only restrict children,” she said, explaining that the U.S. and Europe already have a lot of legislation on the books that isn’t being enforced.

“When are governments really going to enforce, raise the stakes on fines, ban the companies if necessary for not complying?” she added.

Enforce existing laws

Experts argue the sector has for too long escaped accountability and the rigid requirements faced by other industries.

“[Governments] should be implementing the law [and] big tech companies should be facing a slew of regulatory interventions that forbid a whole series of practices that they currently do,” Livingstone said.

She highlighted the U.K.’s Online Safety Act, which “requires safety by design” — this means features such as Snapchat’s “Quick Add” that invite teens to befriend others should be stopped, according to Livingstone.

Livingstone believes that a blanket ban wouldn’t even be under discussion if social media companies had undergone appropriate premarket testing to establish if their features are safe for their target audience.

“There are lots of areas where we have a well-functioning market that requires testing to establish it meets the standards … [before products] can go into the market,” she said. “If we did that for AI and for social media, we would be in a whole different place and we’d not be having to talk about banning children from anything.”

Josh Golin, executive director at Boston-based non-profit Fairplay for Kids, told CNBC that he’d like to see “privacy and safety by design legislation rather than blanket bans” across the U.S.

This includes passing the Children and Teen Online Privacy Protection Act to stop personal data-driven advertising to children, so there’s “less financial incentive for social media companies to target and addict kids.”

Golin added that passing the Senate’s version of the Kids Online Safety Act (KOSA) is also key to ensuring platforms are held legally responsible for design features that can cause addiction or other harms.

He added that Meta has already successfully lobbied to stall KOSA, even though it passed the Senate in 2024. But if the company keeps blocking such legislation, Golin thinks pressure could “line up behind bans because addictive and unsafe is not OK.”


A ban is ‘lazy’ and ‘unfair’

A sweeping social media ban only punishes a generation of young people who have become increasingly dependent on online means of interaction, according to Livingstone. She said bans are a “lazy” solution from governments and an “unfair” outcome for young people.

“It’s the 15 years in which we don’t let our children go outside and meet their friends. It’s the 15 years in which we stopped funding parks and youth clubs for them to meet in,” she said.

“So a ban now is to say to children: ‘We can’t make the regulation work. We can’t update it fast enough. We haven’t built you anything else to do, but that’s just tough. We’ve terrified your parents into feeling that there’s nothing they can do, and we’re going to take you away from the service where you hoped you would feel some sociability and entertainment.’”



Meta’s court losses spell potential trouble for AI research, consumer safety


Meta CEO Mark Zuckerberg leaves the Federal Courthouse in downtown Los Angeles after defending the company in a landmark social media addiction trial in Los Angeles, United States, on February 19, 2026.

Jon Putman | Anadolu | Getty Images

Over a decade ago, Meta – then known as Facebook – hired social science researchers to analyze how the social network’s services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations. 

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings from Meta’s internal research and documents seemed to contradict the way the company portrayed itself publicly. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way. 

Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies, like OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings. 

With AI now getting outsized attention for the harmful effects it’s having on some users, those companies must ask if it’s in their best interest to continue funding research or to suppress it. 

“There was a period of time when there were teams that were created internally who could start to look at things and, for a brief window, you had some absolutely outstanding researchers who were looking at what was happening on these products with a little bit more free rein than I understand they have today,” Boland said in an interview.

Meta’s two defeats this week came in different cases, but they shared a common theme: The company didn’t share what it knew about its products’ harms with the general public.


Jury members had to evaluate millions of corporate documents, including executive emails, presentations and internal research conducted by Meta’s staff. The documents included internal surveys appearing to show a concerning percentage of teenage users receiving unwanted sexual advances on Instagram. There was also research, which Meta eventually halted, implying that people who curbed their use of Facebook became less depressed and anxious.

Plaintiffs’ attorneys in the cases didn’t rely solely on internal research to make their arguments, but those studies helped bolster their positions about Meta’s alleged culpability. Meta’s defense teams argued that certain research was old, taken out of context and misleading, presenting a flawed view of how the company operates and how it views safety.

‘Both sides of the story’

Frances Haugen, former Facebook employee, speaks during a hearing of the Committee on Energy and Commerce Subcommittee on Communications and Technology on Capitol Hill December 1, 2021, in Washington, DC.

Brendan Smialowski | AFP | Getty Images

Haugen’s “disclosures were a significant turning point globally – not just for the companies themselves but for researchers, policymakers and the broader public,” said Kate Blocker, director of research and program at the nonprofit Children and Screens: Institute of Digital Media and Child Development.

The leaks also led to major changes at Meta and in the tech industry, which began to weed out research that could be viewed as counterproductive for the companies. Many teams studying alleged harms and related issues were cut, CNBC previously reported.

Some companies also began removing certain tools and features of their services that third-party researchers utilized to study their platforms.

 “Companies may now view ongoing research as a liability, but independent, third-party research must continue to be supported,” Blocker said.

Much of the internal research used in this week’s trials didn’t contain new revelations, and many of the documents had already been released by other whistleblowers, said Sacha Haworth, executive director of the Tech Oversight Project. What the trials added, Haworth said, were “the very emails, the very words, the very screenshots, the internal marketing presentations, the memos” that offered necessary context.

As the tech industry now pushes aggressively into AI, companies like Meta, OpenAI, and Google have been prioritizing products over research and safety. It’s a trend that concerns Blocker, who said that, “much like with social media before it, there is limited public visibility into what AI companies are studying about their products.”

“AI companies seem to be mostly studying the models themselves – model behavior, model interpretability, and alignment – but there is a significant gap in research regarding the impact of chatbots and digital assistants on child development,” Blocker said. “AI companies have a chance to not repeat the mistakes of the past – we urgently need to establish systems of transparency and access that share what these companies know about their platforms with the public and support further independent evaluation.”

WATCH: Regulatory pressure to follow after landmark social media verdict.



Pinterest stock sinks nearly 17% as tariffs hit earnings. Here’s what’s happening



Pinterest shares closed nearly 17% lower on Friday, after the company cited tariff-related shocks in disappointing fourth-quarter earnings.

The social media company’s fourth-quarter earnings came in below analysts’ expectations, with revenue of $1.32 billion compared with LSEG consensus estimates of $1.33 billion. Net income for the quarter plunged 85% to $277 million from $1.85 billion the prior year.

It also recorded $541.5 million in adjusted earnings before interest, taxes, depreciation and amortization, or EBITDA, below the $550 million that analysts were projecting.

Pinterest expects first-quarter sales to be between $951 million and $971 million, which is also below analysts’ forecasts of $980 million.

CEO Bill Ready said the company “absorbed an exogenous shock this year related to tariffs” and was more exposed to reduced advertising spend from large retailers.

Pinterest also announced plans in January to lay off less than 15% of its workforce and cut back on office space, in a bid to go all in on AI. It said it’s “reallocating resources” to AI-focused teams and prioritizing “AI-powered products and capabilities.”

Pinterest one-day stock chart.

What analysts are saying

In a Friday note, Citi said it was downgrading shares of Pinterest from Buy to Neutral, “given more limited visibility from larger UCAN & EU advertisers due in part to tariffs and challenges across specific verticals,” such as home furnishing, the rebuilding of its go-to-market sales function as Pinterest broadens its advertiser base, and greater investments impacting margins.

Pinterest’s revenue performance is expected to continue to be “pressured near-term by macro-related headwinds,” such as tariffs and consumer spending, Goldman Sachs analysts said in a note on Friday.

But they added: “Despite these near-term headwinds, management remains optimistic around its long-term growth strategy centered around diversifying its advertiser base, automation, and performance-oriented objectives.”

The analysts noted that user growth remains particularly strong among Gen Z users.

The company reported that its fourth-quarter global monthly active users jumped 12% year-over-year to 619 million, representing an all-time high. 

— CNBC’s Jonathan Vanian contributed to this report