Countries around the world are considering teen social media bans – why experts warn it’s a ‘lazy’ fix


Gen Z girl looking at smartphone screen feeling upset scrolling on social media.

Mementojpeg | Moment | Getty Images

Governments around the world are making efforts to crack down on teen social media use amid mounting evidence of potential harms, but critics argue blanket bans are an ineffective quick fix.

Australia became the first country to enforce a sweeping social media ban for under-16s in December, requiring platforms like Meta’s Instagram, ByteDance’s TikTok, Alphabet’s YouTube, Elon Musk’s X, and Reddit to implement age verification measures or face penalties.

Several European countries are now looking to follow Australia’s lead, with the U.K., Spain, France, and Austria drafting their own proposals. Although a national ban in the U.S. looks unlikely, state-level legislation is underway.


It comes after Meta, the parent company of Facebook, Instagram and Threads, faced two separate defeats in trials related to child safety and social media harms in March.

A Santa Fe jury found Meta misled users about child safety on its apps. The next day, a Los Angeles jury ruled that Meta and YouTube designed platform features that contributed to a plaintiff’s mental health harms.

Meta CEO and Chairman Mark Zuckerberg arrives at Los Angeles Superior Court ahead of the social media trial tasked to determine whether social media giants deliberately designed their platforms to be addictive to children, in Los Angeles, on Feb. 18, 2026.


These developments are set to “unleash a lot more legislation,” Sonia Livingstone, social psychology professor and director of the London School of Economics’ Digital Futures for Children center, told CNBC.

However, Livingstone said a social media ban for teens is a slapdash solution from governments that have failed to properly police tech giants for years.

“I think the argument for a ban is an admission of failure that we cannot regulate companies, so we can only restrict children,” she said, explaining that the U.S. and Europe already have a lot of legislation on the books that isn’t being enforced.

“When are governments really going to enforce, raise the stakes on fines, ban the companies if necessary for not complying,” she added.

Enforce existing laws

Experts argue the sector has for too long escaped accountability and the rigid requirements faced by other industries.

“[Governments] should be implementing the law [and] big tech companies should be facing a slew of regulatory interventions that forbid a whole series of practices that they currently do,” Livingstone said.

She highlighted the U.K.’s Online Safety Act, which “requires safety by design” — this means features such as Snapchat’s “Quick Add” that invite teens to befriend others should be stopped, according to Livingstone.

Livingstone believes that a blanket ban wouldn’t even be under discussion if social media companies had undergone appropriate premarket testing to establish if their features are safe for their target audience.

“There are lots of areas where we have a well functioning market that requires testing to establish it meets the standards…[before products] can go into the market,” she said. “If we did that for AI and for social media, we would be in a whole different place and we’d not be having to talk about banning children from anything.”

Josh Golin, executive director at Boston-based non-profit Fairplay for Kids, told CNBC that he’d like to see “privacy and safety by design legislation rather than blanket bans” across the U.S.

This includes passing the Children and Teen Online Privacy Protection Act to put a stop to personal data-driven advertising towards children, so there’s “less financial incentive for social media companies to target and addict kids.”

Golin added that passing the Senate’s version of the Kids Online Safety Act (KOSA) is also key to ensuring platforms are held legally responsible for design features that can cause addiction or other harms.

He added that Meta has already successfully lobbied to stop KOSA even though it passed the Senate in 2024. But if the company continues to block such legislation, Golin thinks pressure could “line up behind bans because addictive and unsafe is not OK.”


A ban is ‘lazy’ and ‘unfair’

A sweeping social media ban only punishes a generation of young people who have become increasingly dependent on online means of interaction, according to Livingstone. She said bans are a “lazy” solution from governments and an “unfair” outcome for young people.

“It’s the 15 years in which we don’t let our children go outside and meet their friends. It’s the 15 years in which we stopped funding parks and youth clubs for them to meet in,” she said.

“So a ban now is to say to children: ‘We can’t make the regulation work. We can’t update it fast enough. We haven’t built you anything else to do, but that’s just tough. We’ve terrified your parents into feeling that there’s nothing they can do, and we’re going to take you away from the service where you hoped you would feel some sociability and entertainment.’”



Microsoft closes worst quarter on Wall Street since 2008 on AI concerns: ‘Redmond is in a pickle’


Microsoft CEO Satya Nadella speaks at the Microsoft AI Tour event in Munich, Germany, on Feb. 25, 2026.

Sven Hoppe | Picture Alliance | Getty Images

Microsoft just closed out its worst quarter on Wall Street since the 2008 financial crisis, as investors soured on the software giant’s prospects in artificial intelligence.

The company’s stock plunged 23% in the first quarter, a steeper drop than any of its tech peers or the Nasdaq, which fell 7% in the period. Microsoft bounced back a bit on Tuesday, alongside a broader market rally, with shares of the company gaining 3.3%, the biggest jump since July.

While Microsoft remains dominant in workplace productivity software and through its Windows operating system, the company is facing twin pressures to grow efficiently in AI while also building out its cloud AI infrastructure to support soaring demand.

Oil prices are surging because of the Iran war, potentially driving up costs for building and running data centers. And on the product side, Copilot, Microsoft’s AI assistant, has yet to show a lot of traction as users flock to competitive services from Google, OpenAI and Anthropic.

Microsoft vs. Nasdaq this year

“Redmond is in a pickle,” wrote Ben Reitzes, an analyst at Melius Research, in a note on March 23, referring to Microsoft’s headquarters in Washington state. Reitzes, who has a hold rating on the stock, said the company has to use valuable capacity from its Azure cloud to fix Copilot, but has no choice “since Copilot is needed to maintain momentum in its most profitable and largest segment.”

Microsoft declined to comment.

Meanwhile, software stocks are getting pummeled as part of an AI-inspired “SaaSpocalypse” that has pushed names like Adobe, Atlassian and ServiceNow down more than 30% this year.

“Much of traditional SaaS is dying/in likely terminal decay,” Jason Lemkin, founder of SaaStr, wrote this week in a post on X, using the acronym for software as a service. In a blog post, he noted that earnings multiples for software trail the S&P 500.

Microsoft’s multiple hasn’t been this low since the fourth quarter of 2022, when OpenAI introduced ChatGPT, according to Capital IQ data.

Gil Luria, an analyst at DA Davidson, told CNBC that the sell-off isn’t justified, and he recommends buying shares. In the latest quarter, Microsoft reported revenue growth of almost 17%, accelerating from a year earlier.

“The dislocation in the fundamental performance of Microsoft and the stock performance of Microsoft, and the valuation of Microsoft, is the biggest it’s been in decades,” Luria said. He said he expects the company’s earnings growth to outpace the broader market this year.

“There is no stickier product in all of enterprise software than Microsoft Windows and Office,” he said.

Microsoft has been trying to build a larger revenue base from productivity software with the Microsoft 365 Copilot AI add-on, but so far, just 3% of commercial Office customers have licenses for it. Luria said he has access to 365 Copilot, but that he’s not a fan. More importantly, he said, Microsoft has pricing power with Office subscriptions. The company announced plans to raise prices in December.

Suleyman’s ‘demotion’

With Copilot struggling to win over users, Microsoft said two weeks ago that Mustafa Suleyman, a co-founder of AI lab DeepMind who had been running Copilot development for consumers, will focus on building AI models. Microsoft has tasked former Snap executive Jacob Andreou with leading the Copilot experience for consumers and commercial clients.

“There is concern that the Microsoft 365 Copilot business has not lived up to quite their expectations, and that’s an area that could see new competitors,” said Kyle Levins, an analyst at Harding Loevner, which held $219 million in Microsoft shares at the end of December.

Levins took the shake-up involving Suleyman as good news. Others did not.

“Sure sounds like a demotion at best,” former Jane Street trader Agustin Lebron wrote on X. The change followed departures of prominent executives, including gaming chief Phil Spencer and Rajesh Jha, Microsoft’s highest-ranking productivity leader, who’s retiring.

Microsoft is still getting healthy growth out of Azure, which is second to Amazon Web Services in cloud infrastructure. Revenue in the division jumped 39% in the December quarter. Finance chief Amy Hood said in January that growth could have been in the 40s if the company had allocated all of its AI chips to Azure, rather than giving some to teams operating services such as Microsoft 365 Copilot.

Azure is benefiting from a massive backlog of business from OpenAI and Anthropic. Microsoft’s commercial remaining performance obligations at Azure more than doubled in the December quarter from a year earlier to $625 billion.


It’s a reminder that, among tech’s hyperscalers, Microsoft was viewed as an early mover in generative AI due to its 2019 investment in OpenAI and strategic partnership with the startup. But the companies no longer have an exclusive arrangement when it comes to cloud infrastructure and are now competing in a number of areas.

In February, OpenAI announced a service called Frontier that the company said “helps enterprises build, deploy, and manage AI agents that can do real work.”

Microsoft CEO Satya Nadella has been wearing a brave face, promoting the company’s AI enhancements on social media.

“It’s a lot of intense competition, but it’s not so zero-sum, as some people make it out to be,” he said in January.

Aaron Foresman, managing director of equity research at Crawford Investment Counsel, a Microsoft investor, said Nadella’s continuing presence is crucial for the company that he’s been leading since replacing Steve Ballmer in 2014.

“We’ve got a lot of trust and confidence in Satya,” Foresman said.

WATCH: Bank of America’s Tal Liani talks reinstating Microsoft as a ‘buy’



Meta’s court losses spell potential trouble for AI research, consumer safety


Meta CEO Mark Zuckerberg leaves the Federal Courthouse in downtown Los Angeles after defending the company in a landmark social media addiction trial in Los Angeles, United States, on February 19, 2026.

Jon Putman | Anadolu | Getty Images

Over a decade ago, Meta – then known as Facebook – hired social science researchers to analyze how the social network’s services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations. 

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings from Meta’s internal research and documents seemed to contradict the way the company portrayed itself publicly. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way. 

Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies, like OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings. 

With AI now getting outsized attention for the harmful effects it’s having on some users, those companies must ask if it’s in their best interest to continue funding research or to suppress it. 

“There was a period of time when there were teams that were created internally who could start to look at things and, for a brief window, you had some absolutely outstanding researchers who were looking at what was happening on these products with a little bit more free rein than I understand they have today,” Boland said in an interview.

Meta’s two defeats this week centered on different cases but they had a common theme: The company didn’t share what it knew about its products’ harms with the general public.


Jury members had to evaluate millions of corporate documents, including executive emails, presentations and internal research conducted by Meta’s staff. The documents included internal surveys appearing to show a concerning percentage of teenage users receiving unwanted sexual advances on Instagram. There was also research, which Meta eventually halted, implying that people who curbed their use of Facebook became less depressed and anxious.

Plaintiffs’ attorneys in the cases didn’t rely solely on internal research to make their arguments, but those studies helped bolster their positions about Meta’s alleged culpability. Meta’s defense teams argued that certain research was old, taken out of context and misleading, presenting a flawed view of how the company operates and how it views safety.

‘Both sides of the story’

Frances Haugen, former Facebook employee, speaks during a hearing of the Committee on Energy and Commerce Subcommittee on Communications and Technology on Capitol Hill December 1, 2021, in Washington, DC.

Brendan Smialowski | AFP | Getty Images

Haugen’s “disclosures were a significant turning point globally – not just for the companies themselves but for researchers, policymakers and the broader public,” said Kate Blocker, director of research and program at the nonprofit Children and Screens: Institute of Digital Media and Child Development.

The leaks also led to major changes at Meta and in the tech industry, which began to weed out research that could be viewed as counterproductive for the companies. Many teams studying alleged harms and related issues were cut, CNBC previously reported.

Some companies also began removing certain tools and features of their services that third-party researchers utilized to study their platforms.

 “Companies may now view ongoing research as a liability, but independent, third-party research must continue to be supported,” Blocker said.

Much of the internal research used in this week’s trials didn’t contain new revelations, and many of the documents had already been released by other whistleblowers, said Sacha Haworth, executive director of the Tech Oversight Project. What the trials added, Haworth said, were “the very emails, the very words, the very screenshots, the internal marketing presentations, the memos” that offered necessary context.

As the tech industry now pushes aggressively into AI, companies like Meta, OpenAI, and Google have been prioritizing products over research and safety. It’s a trend that concerns Blocker, who said that, “much like with social media before it, there is limited public visibility into what AI companies are studying about their products.”

“AI companies seem to be mostly studying the models themselves – model behavior, model interpretability, and alignment – but there is a significant gap in research regarding the impact of chatbots and digital assistants on child development,” Blocker said. “AI companies have a chance to not repeat the mistakes of the past – we urgently need to establish systems of transparency and access that share what these companies know about their platforms with the public and support further independent evaluation.”

WATCH: Regulatory pressure to follow after landmark social media verdict.



Meta must pay $375 million for violating New Mexico law in child exploitation case, jury rules


A New Mexico state court jury on Tuesday held Meta liable for nearly $400 million in civil damages after a trial where the state attorney general accused the Facebook and Instagram operator of failing to safeguard kids who use its apps from child predators.

The civil trial, which began with opening arguments in Santa Fe last month, centered on allegations that Meta violated state consumer protections laws and misled residents about the safety of apps like Facebook and Instagram. New Mexico attorney general Raúl Torrez sued Meta in 2023 following an undercover operation involving the creation of a fake social media profile of a 13-year-old girl that he previously told CNBC “was simply inundated with images and targeted solicitations” from child abusers.

Deliberations began Monday, and jurors were tasked with ruling for or against the defendant, Meta. Jury members found that Meta willfully violated the state’s unfair practices act, and decided the company should pay $375 million in damages based on the number of violations.

Linda Singer, an attorney representing New Mexico, urged jury members during closing statements to impose a civil penalty against Meta that could top $2 billion.

“We respectfully disagree with the verdict and will appeal,” a Meta spokesperson said. “We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online.”

Meta denied the state of New Mexico’s allegations and previously said that it is “focused on demonstrating our longstanding commitment to supporting young people.”

“The jury’s verdict is a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety,” Torrez said in a statement. “Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew. Today the jury joined families, educators, and child safety experts in saying enough is enough.”

When the New Mexico trial’s second phase, conducted without a jury, commences on May 4, a judge will determine whether Meta created a public nuisance and should fund public programs intended to address the alleged harms. The state’s lawyers are also urging Meta to implement changes to its apps and operations, including “enacting effective age verification, removing predators from the platform, and protecting minors from encrypted communications that shield bad actors.”

During the trial, New Mexico prosecutors revealed legal filings detailing internal messages from Meta employees discussing how CEO Mark Zuckerberg’s 2019 announcement to make Facebook Messenger end-to-end encrypted by default would affect the company’s ability to disclose some 7.5 million child sexual abuse material reports to law enforcement.

In an interview with CNBC on Tuesday before the verdict was revealed, Torrez discussed Meta’s argument that the prosecutors cherry picked certain materials to paint an unfair picture about the company, and that Meta has been updating its various apps with safety features.

Torrez said he didn’t think that the jury would “be convinced that they’ve done as much as they can or should have, and that they should be held responsible for it.”

“One of the things that I am really focused on is how we can change the design features of these products, at least within New Mexico, and that would create a standard that could then be modeled elsewhere in the country, and, frankly, around the world,” Torrez said on the sidelines of the Common Sense Summit held in San Francisco.

Torrez said that a similar child exploitation suit involving Snap, filed by his office in 2024, is still in the discovery stage and that his team was “able to overcome section 230 motions” in both the Meta and Snap cases. The tech industry has argued that Section 230 of the Communications Decency Act should shield companies from liability for content shared on their services, prompting prosecutors to test new legal strategies that focus on the design of the apps instead.

Regarding Meta’s criticism that prosecutors are picking certain corporate documents and related materials, Torrez said, “What’s interesting is they accuse us of doing that, but all we’re doing is showing the world what they knew behind closed doors and weren’t willing to tell their users.”

The New Mexico case is one of multiple social media-related trials taking place this year that experts have compared to the Big Tobacco suits from the 1990s due in part to allegations that the companies misled the public about the safety and potential harms of their products.

Jury members in a separate personal injury trial involving Meta and Google’s YouTube have been deliberating in Los Angeles Superior Court since last Friday. The companies are alleged to have misled the public about the safety and design of their respective apps. The jury must determine whether one or both of the companies implemented certain design features that contributed to the mental distress of a plaintiff who alleged that she became addicted to social media apps when she was underage.

A separate federal trial in the Northern District of California will commence later this year. Multiple school districts and parents across the nation allege that the actions and apps of Meta, YouTube, TikTok and Snap caused mental health-related harms to teenagers and children.

WATCH: Would be surprised if Meta workforce cuts are as big as reported, says Evercore’s Mark Mahaney.


Elon Musk misled Twitter investors ahead of $44 billion acquisition, jury says


Elon Musk arrives at federal court on March 4, 2026 in San Francisco, California.

Josh Edelson | Getty Images

A jury in California found that Elon Musk defrauded Twitter shareholders during the runup to his $44 billion acquisition of the social media company, according to a verdict issued on Friday.

Total damages could reach up to $2.6 billion, attorneys for the plaintiffs said.

The class action lawsuit, Pampena v. Musk, was originally filed in October 2022, after Musk completed his purchase of Twitter for $54.20 per share. He later renamed the company X, before merging it with his artificial intelligence company xAI, and then with SpaceX, his reusable rocket manufacturer.

“This is a great example of what you cannot do to the average investor — people that have 401ks, kids, pension funds, teachers, firemen, nurses,” Joseph Cotchett, an attorney for the Twitter investors, told CNBC at the San Francisco courthouse. “That’s what this case was all about. This was not about Musk. It was about the whole operation.”

In an emailed statement, Musk attorneys with Quinn Emanuel said, “We view today’s verdict, where the jury found both for and against the plaintiffs and found no fraud scheme, as a bump in the road. And we look forward to vindication on appeal.”

After Musk bid to buy Twitter in April 2022, his sentiment towards the deal quickly soured as he cast doubt on the company’s claimed level of bots, spam and fake accounts on its platform. Musk wrote in a tweet the following month that his acquisition was “temporarily on hold” until Twitter’s CEO could prove its inauthentic account levels were around the 5% reported in the company’s SEC filings.

Musk’s tweets and additional comments sent shares of Twitter sliding by almost 10% in a single session. The jury deliberated for four days and unanimously found that Musk’s tweets on May 13 and May 17 were materially false or misleading.

Former Twitter shareholders, including retail investors and options traders, argued that Musk’s remarks amounted to a scheme to pressure the company’s board to sell to him for a lower price than his original offer. They claimed he was motivated by stock price declines at Tesla, which would require him to sell even more shares in the automaker than he’d intended in order to finance the buyout.

The plaintiffs in the suit said they sold shares below $54.20 following and in response to Musk’s posts and comments during press interviews. The potential damages figure is based on expert estimates of how much Musk’s flip-flopping affected the share price during the class period.

Attorneys for the Twitter investors said it will be about 90 days before claims administration is set up, and it will then take a couple of months for the government to process claims and for investors to begin to recoup some of their losses.

Musk’s attorneys argued their client’s remarks were based on well-founded concerns about bots, spam and fake accounts on Twitter, and did not amount to securities fraud or a scheme to depress the company’s stock price.

The jury said that though Musk had made false and misleading statements that harmed some Twitter shareholders, he did not engage in a specific scheme to defraud investors.

While the verdict marks a stinging rebuke for Musk, the financial implications are minimal considering his net worth, which currently sits at about $650 billion, according to Bloomberg.

WATCH: Why Tesla is pivoting



Microsoft shakes up Copilot AI leadership team, freeing up Suleyman to build new models


Microsoft AI CEO Mustafa Suleyman speaks during an event highlighting Microsoft Copilot, the company’s AI tool, on April 4, 2025 in Redmond, Washington. The company also celebrated its 50th anniversary.

Stephen Brashear | Getty Images News | Getty Images

Microsoft said Tuesday that it’s bringing together the engineering groups for its commercial and consumer Copilot assistants, which have yet to gain broad adoption.

Jacob Andreou, a former Snap executive who works in Microsoft’s artificial intelligence unit, will become an executive vice president in charge of the consumer and commercial Copilot experience, CEO Satya Nadella wrote in a memo to employees.

Andreou will report to Nadella. Executives Ryan Roslansky, Perry Clarke and Charles Lamanna, who will also report to Nadella, will lead Microsoft 365 applications and the Copilot platform, Nadella wrote.

The Copilot moves will free up executive Mustafa Suleyman, a co-founder of DeepMind, the AI lab Google bought in 2014, to focus more on building new models.

“The next phase of this plan is to restructure our organization to enable me to focus all my energy on our Superintelligence efforts and be able to deliver world class models for Microsoft over the next 5 years,” Suleyman wrote in a memo. “These models will enable us to build enterprise tuned lineages that help improve all our products across the company.”

Since arriving at Microsoft through the Inflection deal in 2024, Suleyman has spent time working on Copilot for consumers, among other initiatives.

Microsoft’s Copilot app had 6 million daily active users in February, while OpenAI’s ChatGPT had 440 million and Google’s Gemini had 82 million, according to data from app analytics company Sensor Tower.

Sensor Tower said that so far in March, Anthropic’s Claude, which has gotten extensive media attention because of Anthropic’s standoff with the U.S. Department of Defense, has reached 9 million daily users, while Copilot still stands at 6 million.

Microsoft incorporates generative AI models from Anthropic and OpenAI. About 3% of commercial users with Office productivity software subscriptions have access to the Microsoft 365 Copilot add-on. Google is pushing Gemini to both consumers and corporations.

In November, Microsoft announced the formation of a superintelligence group under Suleyman, who said Tuesday that frontier model development has always been his main focus and passion.

He said he will “stay directly involved in much of the day-to-day operation” of the broad Microsoft AI group that includes products such as the Bing search engine.

Google controlled 90% of search engine market share in February, while Bing had about 5%, according to estimates from web analytics company StatCounter.

“We are doubling down on our superintelligence mission with the talent and compute to build models that have real product impact, in terms of evals, COGS reduction, as well as advancing the frontier when it comes to meeting enterprise needs and achieving the next set of research breakthroughs,” Nadella wrote.

The shake-up comes as pressure mounts on software companies to show a return on AI investments, as investors worry that the models could disrupt software incumbents.

The iShares Expanded Tech-Software Sector Exchange-Traded Fund is down about 19% so far this year, with Microsoft falling 17% in that period.

Microsoft is constructing models for generating source code, images and audio, and for reasoning, which produces answers that people may find more thoughtful but requires more time, Suleyman said.

At the same time, Microsoft will keep drawing on OpenAI intellectual property. In October, Microsoft said it has IP rights for OpenAI models and products through 2032.

“I’m genuinely thrilled about this change precisely because most of the future value is going to accrue to the model layer, and my job is to create highly COGS-optimized, highly efficient enterprise specific model lineages for Microsoft over the next three to five years,” Suleyman said in an interview, using the acronym for cost of goods sold. “That is singularly the objective, precisely because the model is the product, right? That is the future direction of all the IP.”

WATCH: Microsoft shifts from OpenAI exclusivity and expands its AI basket



Nvidia, Amazon temporarily close Dubai offices, Google employees stranded amid U.S.-Iran war


A plume of smoke rises from the port of Jebel Ali following a reported Iranian strike in Dubai on March 1, 2026.

Fadel Senna | Afp | Getty Images

Nvidia, Amazon and Alphabet are among the big tech firms scrambling to ensure the safety of their employees who are traveling through or based in the Middle East after joint U.S.-Israel strikes on Iran over the weekend.

The massive attack on Iran killed Supreme Leader Ayatollah Ali Khamenei, among others, and Iran retaliated with strikes on Israeli and U.S. bases across the Gulf. The conflict has disrupted civilian life, internet access in Iran, flight routes and energy shipments across the region.

Chip tech leader Nvidia temporarily closed its Dubai offices, with employees there working remotely, according to an email reviewed by CNBC that was sent by CEO Jensen Huang to all employees early Tuesday.

Huang said in his memo that Nvidia’s crisis management team has been “working around the clock and actively supporting affected employees and their families” in the Middle East, including around 6,000 Nvidia employees based in Israel.

In 2019, Nvidia acquired Mellanox, an Israeli company that makes ethernet switches and other networking hardware, for around $7.13 billion, the largest deal in Nvidia’s history at that time. And today, outside of the U.S., Israel represents Nvidia’s largest research and development base.

As of Tuesday morning, all Nvidia employees impacted by the conflict and their immediate families were safe, Huang said.

“Nvidia has deep roots in the region,” Huang wrote. “Thousands of our colleagues live there, and many more across the globe have family and friends affected by these events. Like you, I am watching with great concern for the safety of our Nvidia families.”


“Depart now”

The State Department said Monday that Americans should “depart now” from countries across the Middle East using available commercial transportation, citing “serious safety risks.” By Tuesday afternoon, the agency said it was working to secure military aircraft and charter flights to evacuate Americans from the region amid escalating instability.

The disruptions to air travel left dozens of Google employees stranded in Dubai after a sales conference, according to sources who asked not to be named in order to discuss sensitive matters.

The company’s cloud unit held its “Accelerate” sales kickoff in Dubai last week.

A memo sent to some cloud employees on Sunday morning noted that the company still had team members on the ground and called the recent attacks "concerning," according to employees who asked not to be named in order to speak about internal matters.

Though most employees got out of the region, dozens remain stuck there, the sources said.

Following the attack on Iran, airlines canceled flights en masse. More than 11,000 Middle East flights have been canceled since the U.S.-Israeli strikes over the weekend, according to aviation-data firm Cirium.

Google said the majority of affected employees are based in the region rather than in the U.S. It added that it has security and safety measures in place for its employees in the Middle East and has advised staff to follow guidance from local authorities.

“The situation in the Middle East is evolving rapidly and we are monitoring it carefully,” a Google spokesperson said in an emailed statement. “Our focus is on the safety and well-being of our employees in the region.”

Tech’s Middle East hubs

Dubai is a regional hub for Google’s cloud and sales operations across the Middle East and North Africa. Last year, Dubai’s Crown Prince Sheikh Hamdan bin Mohammed visited Google’s offices, exploring the company’s latest AI initiatives.

Tel Aviv, a central Israeli city that has been hit with strikes, is also a major hub for Google. The search giant is in the process of expanding into a massive new headquarters in the ToHa2 Tower, expected to be one of its largest global sites.

Google did not immediately respond to questions about how Tel Aviv-based operations and employees have been affected by the Iran conflict.

Amazon, which has expanded its presence in the Middle East in recent years, is also altering its operations there as it responds to the widening conflict.

The company is instructing all of its corporate employees in the Middle East to work remotely and “follow local government guidelines.”

“The safety of our employees and partners remains our top priority, and we are working closely with local teams and local authorities to ensure they are supported,” an Amazon spokesperson said in a statement.

Amazon operates corporate offices in the United Arab Emirates, Saudi Arabia, Jordan, Bahrain, Kuwait, Egypt, Turkey and Israel. It also operates warehouses and data centers throughout the region, and “quick commerce outlets” in the UAE to fulfill 15-minute deliveries.

Its sprawling data center footprint became a flashpoint in the conflict on Sunday. Two data centers in the UAE were “directly struck” by drones, while a facility in Bahrain was also damaged by a nearby drone strike.

The facilities sustained structural damage, power disruptions and some water damage as firefighters worked to extinguish the fires. The sites remain offline, and some Amazon Web Services offerings, such as its popular virtual server and database services, have continued to experience issues.

AWS encouraged customers to back up their data or consider migrating workloads to other regions.

“Even as we work to restore these facilities, the ongoing conflict in the region means that the broader operating environment in the Middle East remains unpredictable,” AWS said.

Social media company Snap told CNBC that it’s asking employees at its four Middle East offices to work remotely until further notice.

The company said staffers are being advised to follow advice from local authorities regarding shelter-in-place orders and departure recommendations.

— CNBC’s Jonathan Vanian contributed to this report

WATCH: Iran has many more drones than originally expected

Iran has many more drones than originally expected, says MCC's Michelle Caruso-Cabrera


Mark Zuckerberg said he reached out to Apple CEO Tim Cook to discuss ‘wellbeing of teens and kids’



Meta CEO Mark Zuckerberg said in court testimony Wednesday that he reached out to Apple CEO Tim Cook to discuss the "wellbeing of teens and kids."

The comments came after defense lawyer Paul Schmidt pointed to an email exchange between Zuckerberg and Cook from February 2018. "I thought there were opportunities that our company and Apple could be doing and I wanted to talk to Tim about that," Zuckerberg said.

The email exchange was part of a broader effort by the defense to show jurors that Zuckerberg was more proactive about the safety of young Instagram users than opposing counsel had portrayed, going so far as to reach out to a corporate rival.

“I care about the wellbeing of teens and kids who are using our services,” Zuckerberg said when characterizing some of the content of the email.

Zuckerberg testified during a landmark trial in Los Angeles Superior Court over the question of social media and safety, which is being likened to the industry’s “Big Tobacco” moment.

Part of the trial has focused on the alleged harms of certain digital filters promoting cosmetic surgery, which Instagram chief Adam Mosseri testified about earlier in the trial.

Zuckerberg said that the company consulted with various stakeholders about the use of beauty filters on Instagram, but he did not name them. The plaintiff's lawyer questioned Zuckerberg about messages showing he lifted a temporary ban on the filters because it was "paternalistic."

“It sounds like something I would say and something I feel,” Zuckerberg replied. “It feels a little overbearing.”

Zuckerberg was pressed about the decision to allow the feature when the company had guidance from experts that the beauty filters had negative effects, particularly on young girls.

He was specifically asked about one study by the University of Chicago in which 18 experts said that beauty filters as a feature cause harm to teenage girls.

Zuckerberg, who noted that he believed this was referring to so-called cosmetic surgery filters, said he saw that feedback and discussed with the team, and it came down to free expression. “I genuinely want to err on the side of giving people the ability to express themselves,” Zuckerberg said.

Meta CEO Mark Zuckerberg arrives at Los Angeles Superior Court on Feb. 18, 2026.

Jill Connelly | Getty Images

Zuckerberg echoed Mosseri’s previous sentiments shared in court that Meta ultimately decided to lift a temporary ban on the plastic surgery digital filters without promoting them to other users.

Plaintiff's attorney Mark Lanier noted that Facebook vice president of product design and responsible innovation Margaret Stewart said in an email that while she would support Zuckerberg's ultimate decision, she didn't believe it was the "right call given the risks." She mentioned in her message that she had dealt with a personal family situation that she acknowledged made her biased, but that gave her "first-hand knowledge" of the alleged harms.

Zuckerberg said that many Meta employees disagree with the company's decisions, something the company encourages, and that while he understood Stewart's perspective, there was ultimately not enough causal evidence to support the outside experts' assertion of harm.

When Lanier asked whether Zuckerberg had a college degree that would indicate expertise in causation, the Meta chief said, "I don't have a college degree in anything."

"I agree I do not know the legal understanding of causation, but I think I have a pretty good idea of how statistics work," Zuckerberg said.

The trial, which began in late January, centers on a young woman who alleged that she became addicted to social media and video streaming apps like Instagram and YouTube.

The Facebook founder pushed back against the notion that the social media company made increasing time spent on Instagram a company goal.

Zuckerberg was addressing a 2015 email thread in which he appeared to highlight improving engagement metrics as an urgent matter for the company.

While the email chain may have contained the words “company goals,” Zuckerberg said the comments could have been an aspiration, and asserted that Meta doesn’t have those objectives.

Lawyers later brought up evidence from Mosseri, including goals to increase users' daily engagement time on the platform to 40 minutes in 2023 and to 46 minutes in 2026.

Zuckerberg said the company uses milestones internally to measure against competitors and “deliver the results we want to see.” He asserted that the company is building services to help people connect.

Meta CEO Mark Zuckerberg arrives at Los Angeles Superior Court ahead of the social media trial tasked to determine whether social media giants deliberately designed their platforms to be addictive to children, in Los Angeles, Feb. 18, 2026.

Frederic J. Brown | AFP | Getty Images

Lawyers also raised questions over whether the company has taken adequate steps to remove underage users from its platform.

Zuckerberg said during his testimony that some users lie about their age when signing up for Instagram, which requires users to be 13 or older. Lawyers also shared a document which stated that 4 million kids under 13 used the platform in the U.S.

The Facebook founder said that the company removes all underage users it identifies and that its sign-up process includes terms stating the minimum age.

"You expect a 9-year-old to read all of the fine print?" a lawyer for the plaintiff asked. "That's your basis for swearing under oath that children under 13 are not allowed?"

Instagram did not begin requiring birthdays at sign-up until late 2019. Several times, Zuckerberg expressed his belief that age verification is better handled by companies like Apple and Google, which maintain mobile operating systems and app stores.

Zuckerberg later responded to questions about documents in which the company reported higher retention rates for users who join as tweens. He said lawyers were "mischaracterizing" his words and that Meta doesn't always launch products in development, such as an Instagram app for users under 13.

Meta Platforms CEO Mark Zuckerberg testifies at a Los Angeles Superior Court trial in a key test case accusing Meta and Google’s YouTube of harming kids’ mental health through addictive platforms, in Los Angeles, California, U.S., Feb. 18, 2026 in a courtroom sketch.

Mona Edwards | Reuters

During Wednesday’s session, Judge Carolyn B. Kuhl threatened to hold anyone using AI smart glasses during Zuckerberg’s testimony in contempt of court.

“If you have done that, you must delete that, or you will be held in contempt of the court,” the judge said. “This is very serious.”

Members of the team escorting Zuckerberg into the building just before noon ET were pictured wearing the Meta Ray-Ban artificial intelligence glasses.

Recording is not allowed in the courtroom.

Lawyers also questioned whether Zuckerberg previously lied about the board’s inability to fire him.

"If the board wants to fire me, I could elect a new board and reinstate myself," he said, in response to remarks he previously made on Joe Rogan's podcast.

During his interview with the podcaster last year, Zuckerberg had said he wasn’t worried about losing his job because he holds voting power.

Zuckerberg told the courtroom he is “very bad” at media.

Lawyers representing the plaintiff contend that Meta, YouTube, TikTok and Snap misled the public about the safety of their services and knew that the design of their apps and certain features caused mental health harms to young users.

Snap and TikTok settled with the plaintiff involved in the case before the trial began.

Meta has denied the allegations and a spokesperson told CNBC in a statement that “the question for the jury in Los Angeles is whether Instagram was a substantial factor in the plaintiff’s mental health struggles.”

Last week, Instagram’s Mosseri testified that while he thinks there can be problematic usage of social media, he doesn’t believe that’s the same as clinical addiction.

Adam Mosseri, head of Instagram at Meta Platforms Inc., arrives at Los Angeles Superior Court in Los Angeles, California, US, on Wednesday, Feb. 11, 2026.

Caroline Brehman | Bloomberg | Getty Images

“So it’s a personal thing, but yeah, I do think it’s possible to use Instagram more than you feel good about,” Mosseri said. “Too much is relative, it’s personal.”

The Los Angeles trial is one of several major court cases taking place this year that experts have described as the social media industry’s “Big Tobacco” moment because of the alleged harm caused by their products and the related company efforts to deceive the public.

Parents of children who they allege suffered from the detrimental effects of social media gather outside the courthouse in Los Angeles on Wednesday, Feb. 18.

Jonathan Vanian

Meta is also involved in a major trial in New Mexico, in which the state’s attorney general, Raúl Torrez, alleges that the social media giant failed to ensure that children and young users are safe from online predators.

“What we are really alleging is that Meta has created a dangerous product, a product that enables not only the targeting of children, but the exploitation of children in virtual spaces and in the real world,” Torrez told CNBC’s “Squawk Box” last week when opening arguments for the trial began.

This summer, another social media trial is expected to begin in the Northern District of California. That trial also involves companies including Meta and YouTube, and allegations that their apps contain design flaws that harm young users' mental health.

CNBC’s Jennifer Elias contributed reporting.

WATCH: New Mexico AG Raul Torrez talks about his case against Meta

New Mexico AG Raul Torrez: Meta has created a space for predators to target and exploit children