Meta’s long-awaited AI model is finally here. But can it make money?


Mark Zuckerberg, chief executive officer of Meta Platforms Inc., wears a pair of Meta Oakley Vanguard AI glasses during the Meta Connect event in Menlo Park, California, US, on Wednesday, Sept. 17, 2025.

David Paul Morris | Bloomberg | Getty Images

Almost 10 months after Meta spent billions of dollars to bring in Scale AI’s Alexandr Wang as the centerpiece of Mark Zuckerberg’s AI overhaul, the company finally revealed its first new model on Wednesday. One big question is — will users pay for it?

While rivals like OpenAI, Anthropic and Google have spearheaded the artificial intelligence boom with powerful models and popular chatbots as well as other services, Meta has been a hefty spender on AI but has yet to show any new revenue streams from it.

In June, Meta shelled out more than $14 billion to hire Wang and some of his top engineers and researchers, soon creating Meta Superintelligence Labs as a new elite unit. And in January, the company told Wall Street it plans to pour between $115 billion and $135 billion this year into capital expenditures, nearly double its 2025 capex figure.

“It’s been a year of basically no releases and a lot of hiring, and then the capex worries for this year are pronounced,” said Morningstar analyst Malik Ahmed Khan, in an interview. “I think Meta had to show investors and operators they have been working on something of substance. That’s the first step.”

Meta’s second step, Khan said, is making the model work and figuring out how to monetize it.

Muse Spark, Meta’s newly released model, is proprietary, a sharp change from its predecessor Llama family of open-source models, though the company said it does plan to eventually release some open-source versions. Zuckerberg shook up his company’s strategy after the April release of Llama 4, which failed to captivate developers.

Alexandr Wang speaks on CNBC’s “Squawk Box” outside the World Economic Forum in Davos, Switzerland, on Jan. 23, 2025.

Gerry Miller | CNBC

Arun Chandrasekaran, an analyst at Gartner, described the move as a “major shift” and said it “signals an intention to move away” from the Llama brand.

Taking a cue from other frontier AI labs, Meta aims to eventually offer third parties paid API access to Muse Spark after an initial “private API preview” with “select parties.”

But Meta is very late to the game. OpenAI and Anthropic are collectively valued at well over $1 trillion, thanks to the popularity of their models and services, and Google has embedded Gemini across its portfolio of apps and products, while also selling access to the Gemini models via its cloud unit.

To succeed, Meta’s AI technology has to be good enough to compete with top models while also providing a novel business opportunity.

‘Crown jewel’

Andrew Boone, an analyst at Citizens, said Meta’s clear advantage is the more than 3 billion people who use Facebook, Instagram and WhatsApp every month. And the business opportunity for Meta has nothing to do with trying to attract developers, who currently swarm to OpenAI, Anthropic, Gemini and a host of Chinese models, but rather to focus on its core market: advertising.

“That’s the crown jewel, that’s what needs to continue to improve,” said Boone, who recommends buying the stock.

Khan shares that sentiment.

“I believe that would be the killer use case from Meta’s perspective,” Khan said, with the goal being to “make ads more engaging and improve targeting.”

Advertising accounted for 98% of Meta’s $200 billion in revenue last year. The company has made numerous efforts to diversify its business, most notably spending tens of billions of dollars to try to make the metaverse happen. But Meta’s ad model is the one thing that’s consistently worked, and the company’s investments in AI have served to improve its targeting capabilities and provide better tools for marketers.

Khan said that as advertisers see returns on investment from their Meta spending, they reinvest that money back into more ads on the platform. So it makes sense that they’d be willing to pay for AI services if they can get even better results.

Meta declined to comment about its API plans beyond its initial announcement.


Based on the technical benchmarks Meta released comparing Muse Spark to rivals, the new AI model appears to excel in areas related to image and video processing, said Doris Xin, CEO of AI startup Disarray. Those are important characteristics for advertisers seeking to make dynamic campaigns for an audience that’s grown accustomed to viewing short-form videos on Reels or gawking at cat photos on Facebook and Instagram.

“Compared to like Claude and Gemini, I think it definitely feels like it has more of a consumer bent,” Xin said about Muse Spark.

Zuckerberg, however, has long had ambitions that go well beyond advertising. His approach with Llama was targeted at developers and getting the best and brightest minds in AI using Meta’s tools even if they weren’t paying for them.

With the switch to proprietary models, the pitch to developers becomes more difficult. Joseph Ott, CEO of AI startup Samu Legal Technologies, said he’s unsure about where he would find value.

“The only reason I would use Llama is that I could fine-tune it,” Ott said, referring to the practice of customizing AI models.

Many developers use so-called open-weight AI models, like those provided by Chinese tech companies, as a basis to train AI models to meet their specific use cases. Ott said it’s unclear what would make Meta’s Muse Spark stand out against free or cheaper alternatives and the leading proprietary AI models.

Ulrik Stig Hansen, co-founder of AI and data training startup Encord, said it’s important for Meta to develop its own AI foundation models to avoid any future dependencies on third parties. As one of the few companies with the resources and computing infrastructure necessary to create and maintain big AI models, Meta wants to ensure that it remains relevant in the hottest market on the planet.

“It is about AI sovereignty and being a player in the game,” Hansen said. “They want to be perceived and known as an AI company.”

As for Meta’s massive investment in Wang and his team, Boone said the latest benchmarks suggest that Zuckerberg got what he wanted, and now it’s “back on Mark.”

“We just gave you a state-of-the-art frontier model,” Boone said, referring to the team behind Muse Spark. “What are you going to do with it?”

WATCH: Meta unveils its new AI model: “Muse Spark.”


Correction: Advertising accounted for 98% of Meta’s $200 billion in revenue last year. An earlier version mischaracterized the figure.



Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions


Meta is debuting its first major artificial intelligence model since the costly hiring of Scale AI’s Alexandr Wang nine months ago, as the Facebook parent aims to carve out a niche in a market that’s being dominated by OpenAI, Anthropic and Google.

Dubbed Muse Spark and originally codenamed Avocado, the AI model announced Wednesday is the first from the company’s new Muse series developed by Meta Superintelligence Labs, the AI unit that Wang oversees. Wang joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, where he was CEO.

Meta is desperate to regain momentum in the fiercely competitive AI market following the disappointing debut of its latest open-source models last April. The release failed to captivate developers, leading CEO Mark Zuckerberg to pivot his strategy.

“Over the last nine months, Meta Superintelligence Labs rebuilt our AI stack from the ground up, moving faster than any development cycle we have run before,” Meta said in a blog post on Wednesday. “This initial model is small and fast by design, yet capable enough to reason through complex questions in science, math, and health. It is a powerful foundation, and the next generation is already in development.”

Meta isn’t positioning Muse Spark as a top-of-the-line model, but is instead highlighting its efficiency and “competitive performance” on various tasks.

The new Muse Spark will be proprietary rather than open source, with the company saying it has “hope to open-source future versions of the model.” Meta had previously taken an open-source approach to AI with its Llama family of models.

Meta said in a technical blog about the new model that improved AI training techniques, along with rebuilt technology infrastructure, have enabled the company to create smaller AI models that are as capable as its older midsize Llama 4 variant for “an order of magnitude less compute.”

“Muse Spark offers competitive performance in multimodal perception, reasoning, health, and agentic tasks,” Meta said in the post. “We continue to invest in areas with current performance gaps, specifically long-horizon agentic systems and coding workflows.”

Meta is also experimenting with a new AI model revenue stream by offering third-party developers access to Muse Spark’s underlying technology via an API. Currently, only unspecified “select partners” can access the AI model’s “private API preview,” but Meta said it plans to eventually offer paid API access to a wider audience.

The new model now powers the company’s digital assistant in the standalone Meta AI app and desktop website. Muse Spark will debut in the coming weeks inside Facebook, Instagram, WhatsApp and Messenger, as well as in the company’s Ray-Ban Meta AI glasses. Meta also plans for Muse Spark to eventually power the company’s Vibes AI video feature in the Meta AI app. That service currently uses AI models from third parties like Black Forest Labs.

With Muse Spark, users of the standalone Meta AI app and related website will now be able to alternate between certain modes depending on the sophistication of their prompts. With Instant mode, users can get quick answers to simple questions whereas Thinking mode lets them input more complicated queries related to tasks like analyzing legal documents or gleaning nutritional information from photos of grocery store products.

Additionally, a Contemplating mode “will be rolling out gradually” in the Meta AI app and site for the most complicated queries and tasks, Meta said in the technical blog. In this mode, the Muse Spark model utilizes a squad of AI agents to help “reason in parallel,” thus helping it “compete with the extreme reasoning modes of frontier models such as Gemini Deep Think and GPT Pro,” the technical blog said.

The revamped Meta AI with Muse Spark will also contain a Shopping mode that the company said will be able to help people buy clothes or decorate rooms.

“Shopping mode draws from the styling inspiration and brand storytelling already happening across our apps, surfacing ideas from the creators and communities people already follow,” Meta’s blog post said.


WATCH: Alphabet, Meta, Microsoft all down as data center spending rises.


Meta’s court losses spell potential trouble for AI research, consumer safety


Meta CEO Mark Zuckerberg leaves the Federal Courthouse in downtown Los Angeles after defending the company in a landmark social media addiction trial in Los Angeles, United States, on February 19, 2026.

Jon Putman | Anadolu | Getty Images

Over a decade ago, Meta – then known as Facebook – hired social science researchers to analyze how the social network’s services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations. 

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings from Meta’s internal research and documents seemed to contradict the way the company portrayed itself publicly. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way. 

Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies, like OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings. 

With AI now getting outsized attention for the harmful effects it’s having on some users, those companies must ask if it’s in their best interest to continue funding research or to suppress it. 

“There was a period of time when there were teams that were created internally who could start to look at things and, for a brief window, you had some absolutely outstanding researchers who were looking at what was happening on these products with a little bit more free rein than I understand they have today,” Boland said in an interview.

Meta’s two defeats this week centered on different cases but they had a common theme: The company didn’t share what it knew about its products’ harms with the general public.


Jury members had to evaluate millions of corporate documents, including executive emails, presentations and internal research conducted by Meta’s staff. The documents included internal surveys appearing to show a concerning percentage of teenage users receiving unwanted sexual advances on Instagram. There was also research, which Meta eventually halted, implying that people who curbed their use of Facebook became less depressed and anxious.

Plaintiffs’ attorneys in the cases didn’t rely solely on internal research to make their arguments, but those studies helped bolster their positions about Meta’s alleged culpability. Meta’s defense teams argued that certain research was old, taken out of context and misleading, presenting a flawed view of how the company operates and how it views safety.

‘Both sides of the story’

Frances Haugen, former Facebook employee, speaks during a hearing of the Committee on Energy and Commerce Subcommittee on Communications and Technology on Capitol Hill December 1, 2021, in Washington, DC.

Brendan Smialowski | AFP | Getty Images

Haugen’s “disclosures were a significant turning point globally – not just for the companies themselves but for researchers, policymakers and the broader public,” said Kate Blocker, director of research and program at the nonprofit Children and Screens: Institute of Digital Media and Child Development.

The leaks also led to major changes at Meta and in the tech industry, which began to weed out research that could be viewed as counterproductive for the companies. Many teams studying alleged harms and related issues were cut, CNBC previously reported.

Some companies also began removing certain tools and features of their services that third-party researchers utilized to study their platforms.

 “Companies may now view ongoing research as a liability, but independent, third-party research must continue to be supported,” Blocker said.

Much of the internal research used in this week’s trials didn’t contain new revelations, and many of the documents had already been released by other whistleblowers, said Sacha Haworth, executive director of the Tech Oversight Project. What the trials added, Haworth said, were “the very emails, the very words, the very screenshots, the internal marketing presentations, the memos” that offered necessary context.

As the tech industry now pushes aggressively into AI, companies like Meta, OpenAI, and Google have been prioritizing products over research and safety. It’s a trend that concerns Blocker, who said that, “much like with social media before it, there is limited public visibility into what AI companies are studying about their products.”

“AI companies seem to be mostly studying the models themselves – model behavior, model interpretability, and alignment – but there is a significant gap in research regarding the impact of chatbots and digital assistants on child development,” Blocker said. “AI companies have a chance to not repeat the mistakes of the past – we urgently need to establish systems of transparency and access that share what these companies know about their platforms with the public and support further independent evaluation.”

WATCH: Regulatory pressure to follow after landmark social media verdict.



Meta must pay $375 million for violating New Mexico law in child exploitation case, jury rules


A New Mexico state court jury on Tuesday held Meta liable for nearly $400 million in civil damages after a trial where the state attorney general accused the Facebook and Instagram operator of failing to safeguard kids who use its apps from child predators.

The civil trial, which began with opening arguments in Santa Fe last month, centered on allegations that Meta violated state consumer protection laws and misled residents about the safety of apps like Facebook and Instagram. New Mexico attorney general Raúl Torrez sued Meta in 2023 following an undercover operation involving the creation of a fake social media profile of a 13-year-old girl that he previously told CNBC “was simply inundated with images and targeted solicitations” from child abusers.

Deliberations began Monday, and jurors were tasked with ruling for or against Meta. Jury members found that Meta willfully violated the state’s unfair practices act and decided the company should pay $375 million in damages based on the number of violations.

Linda Singer, an attorney representing New Mexico, urged jury members during closing statements to impose a civil penalty against Meta that could top $2 billion.

“We respectfully disagree with the verdict and will appeal,” a Meta spokesperson said. “We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online.”

Meta denied the state of New Mexico’s allegations and previously said that it is “focused on demonstrating our longstanding commitment to supporting young people.”

“The jury’s verdict is a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety,” Torrez said in a statement. “Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew. Today the jury joined families, educators, and child safety experts in saying enough is enough.”

When the New Mexico trial’s second phase, conducted without a jury, commences on May 4, a judge will determine whether Meta created a public nuisance and should fund public programs intended to address the alleged harms. The state’s lawyers are also urging Meta to implement changes to its apps and operations, including “enacting effective age verification, removing predators from the platform, and protecting minors from encrypted communications that shield bad actors.”

During the trial, New Mexico prosecutors revealed legal filings detailing internal messages from Meta employees discussing how CEO Mark Zuckerberg’s 2019 announcement to make Facebook Messenger end-to-end encrypted by default would impact the ability to disclose to law enforcement some 7.5 million child sexual abuse material reports.

In an interview with CNBC on Tuesday before the verdict was revealed, Torrez discussed Meta’s argument that the prosecutors cherry-picked certain materials to paint an unfair picture of the company, and that Meta has been updating its various apps with safety features.

Torrez said he didn’t think that the jury would “be convinced that they’ve done as much as they can or should have, and that they should be held responsible for it.”

“One of the things that I am really focused on is how we can change the design features of these products, at least within New Mexico, and that would create a standard that could then be modeled elsewhere in the country, and, frankly, around the world,” Torrez said on the sidelines of the Common Sense Summit held in San Francisco.

Torrez said that a similar child-exploitation related suit involving Snap, filed by his office in 2024, is still in the discovery stages and that his team was “able to overcome section 230 motions” in both the Meta and Snap cases. The tech industry has argued that the Section 230 provision of the Communications Decency Act should prevent companies from being held liable for content shared on their respective services, resulting in prosecutors testing new legal strategies focusing on the design of the apps instead.

Regarding Meta’s criticism that prosecutors are picking certain corporate documents and related materials, Torrez said, “What’s interesting is they accuse us of doing that, but all we’re doing is showing the world what they knew behind closed doors and weren’t willing to tell their users.”

The New Mexico case is one of multiple social media-related trials taking place this year that experts have compared to the Big Tobacco suits from the 1990s due in part to allegations that the companies misled the public about the safety and potential harms of their products.

Jury members in a separate, personal injury trial involving Meta and Google’s YouTube have been deliberating in Los Angeles Superior Court since last Friday. The companies are alleged to have misled the public about the safety and design of their respective apps. The jury must determine whether one or both of the companies implemented certain design features that contributed to the mental distress of a plaintiff who alleged that she became addicted to social media apps when she was underage.

A separate federal trial in the Northern District of California will commence later this year. Multiple school districts and parents across the nation allege that the actions and apps of Meta, YouTube, TikTok and Snap caused negative mental health-related harms to teenagers and children.

WATCH: Would be surprised if Meta workforce cuts are as big as reported, says Evercore’s Mark Mahaney.


Mark Zuckerberg said he reached out to Apple CEO Tim Cook to discuss ‘wellbeing of teens and kids’


Meta CEO Mark Zuckerberg said in a Wednesday court testimony that he reached out to Apple CEO Tim Cook to discuss the “wellbeing of teens and kids.”

The comments came after the defense lawyer Paul Schmidt pointed to an email exchange between Zuckerberg and Cook from February 2018. “I thought there were opportunities that our company and Apple could be doing and I wanted to talk to Tim about that,” Zuckerberg said.

The email exchange was part of a broader portrayal by the defense attorney to show jury members that Zuckerberg was more proactive about the safety of young Instagram users than what was previously presented to court by the opposing counsel, going so far as to reach out to a corporate rival.

“I care about the wellbeing of teens and kids who are using our services,” Zuckerberg said when characterizing some of the content of the email.

Zuckerberg testified during a landmark trial in Los Angeles Superior Court over the question of social media and safety, which is being likened to the industry’s “Big Tobacco” moment.

Part of the trial focused on the alleged harms of certain digital filters promoting cosmetic surgery, which Instagram chief Adam Mosseri testified about earlier in the trial.

Zuckerberg said that the company consulted with various stakeholders about the use of beauty filters on Instagram, but he did not specifically name them. The plaintiff’s lawyer questioned Zuckerberg about messages showing he lifted the ban because it was “paternalistic.”

“It sounds like something I would say and something I feel,” Zuckerberg replied. “It feels a little overbearing.”

Zuckerberg was pressed about the decision to allow the feature when the company had guidance from experts that the beauty filters had negative effects, particularly on young girls.

He was specifically asked about one study by the University of Chicago in which 18 experts said that beauty filters as a feature cause harm to teenage girls.

Zuckerberg, who noted that he believed this was referring to so-called cosmetic surgery filters, said he saw that feedback and discussed with the team, and it came down to free expression. “I genuinely want to err on the side of giving people the ability to express themselves,” Zuckerberg said.

Meta CEO Mark Zuckerberg arrives at Los Angeles Superior Court on Feb. 18, 2026.

Jill Connelly | Getty Images

Zuckerberg echoed Mosseri’s previous sentiments shared in court that Meta ultimately decided to lift a temporary ban on the plastic surgery digital filters without promoting them to other users.

Defense attorney Mark Lanier noted that Facebook vice president of product design and responsible innovation Margaret Stewart said in an email that while she would support Zuckerberg’s ultimate decision, she didn’t believe it was the “right call given the risks.” She mentioned in her message that she had dealt with a personal family situation that she acknowledged made her biased, but which gave her “first-hand knowledge” of the alleged harms.

Zuckerberg said that many Meta employees disagree with the company’s decisions, which is something the company encourages, and while he understood Stewart’s perspective, there was ultimately not enough causal evidence to support the assertion of harms by the outside experts.

When Lanier asked if Zuckerberg has a college degree that would indicate expertise in causation, the Meta chief said, “I don’t have a college degree in anything.”

“I agree I do not know the legal understanding of causation, but I think I have a pretty good idea of how statistics work,” Zuckerberg said.

The trial, which began in late January, centers on a young woman who alleged that she became addicted to social media and video streaming apps like Instagram and YouTube.

The Facebook founder pushed back against the notion that the social media company made increasing time spent on Instagram a company goal.

Zuckerberg was addressing a 2015 email thread in which he appeared to highlight improving engagement metrics as an urgent matter for the company.

While the email chain may have contained the words “company goals,” Zuckerberg said the comments could have been an aspiration, and asserted that Meta doesn’t have those objectives.

Lawyers later brought up evidence from Mosseri, which included goals to increase users’ daily engagement time on the platform to 40 minutes in 2023 and to 46 minutes in 2026.

Zuckerberg said the company uses milestones internally to measure against competitors and “deliver the results we want to see.” He asserted that the company is building services to help people connect.

Meta CEO Mark Zuckerberg arrives at Los Angeles Superior Court ahead of the social media trial tasked to determine whether social media giants deliberately designed their platforms to be addictive to children, in Los Angeles, Feb. 18, 2026.

Frederic J. Brown | AFP | Getty Images

Lawyers also raised questions over whether the company has taken adequate steps to remove underage users from its platform.

Zuckerberg said during his testimony that some users lie about their age when signing up for Instagram, which requires users to be 13 or older. Lawyers also shared a document which stated that 4 million kids under 13 used the platform in the U.S.

The Facebook founder said that the company removes all underage users it identifies and includes terms about age usage during the sign-up process.

“You expect a 9-year-old to read all of the fine print?” a lawyer for the plaintiff asked. “That’s your basis for swearing under oath that children under 13 are not allowed?”

Instagram did not begin requiring birthdays at sign-up until late 2019. At several times, Zuckerberg brought up his belief that age-verification is better suited for companies like Apple and Google, which maintain mobile operating systems and app stores.

Zuckerberg later responded to questions about documents in which the company reported a higher retention rate on its platform for users who join as tweens. He said lawyers were “mischaracterizing” his words and that Meta doesn’t always launch products in development such as an Instagram app for users under 13.

Meta Platforms CEO Mark Zuckerberg testifies at a Los Angeles Superior Court trial in a key test case accusing Meta and Google’s YouTube of harming kids’ mental health through addictive platforms, in Los Angeles, California, U.S., Feb. 18, 2026 in a courtroom sketch.

Mona Edwards | Reuters

During Wednesday’s session, Judge Carolyn B. Kuhl threatened to hold anyone using AI smart glasses during Zuckerberg’s testimony in contempt of court.

“If you have done that, you must delete that, or you will be held in contempt of the court,” the judge said. “This is very serious.”

Members of the team escorting Zuckerberg into the building just before noon ET were pictured wearing the Meta Ray-Ban artificial intelligence glasses.

Recording is not allowed in the courtroom.

Lawyers also questioned whether Zuckerberg previously lied about the board’s inability to fire him.

“If the board wants to fire me, I could elect a new board and reinstate myself,” he said, in response to remarks he previously made on Joe Rogan’s podcast.

During his interview with the podcaster last year, Zuckerberg had said he wasn’t worried about losing his job because he holds voting power.

Zuckerberg told the courtroom he is “very bad” at media.

Lawyers representing the plaintiff contend that Meta, YouTube, TikTok and Snap misled the public about the safety of their services and knew that the design of their apps and certain features caused mental health harms to young users.

Snap and TikTok settled with the plaintiff involved in the case before the trial began.

Meta has denied the allegations and a spokesperson told CNBC in a statement that “the question for the jury in Los Angeles is whether Instagram was a substantial factor in the plaintiff’s mental health struggles.”

Last week, Instagram’s Mosseri testified that while he thinks there can be problematic usage of social media, he doesn’t believe that’s the same as clinical addiction.

Adam Mosseri, head of Instagram at Meta Platforms Inc., arrives at Los Angeles Superior Court in Los Angeles, California, US, on Wednesday, Feb. 11, 2026.

Caroline Brehman | Bloomberg | Getty Images

“So it’s a personal thing, but yeah, I do think it’s possible to use Instagram more than you feel good about,” Mosseri said. “Too much is relative, it’s personal.”

The Los Angeles trial is one of several major court cases taking place this year that experts have described as the social media industry’s “Big Tobacco” moment because of the alleged harm caused by their products and the related company efforts to deceive the public.

Parents of children who they say suffered detrimental effects from social media, outside the courthouse in Los Angeles on Wednesday, Feb. 18.

Jonathan Vanian

Meta is also involved in a major trial in New Mexico, in which the state’s attorney general, Raúl Torrez, alleges that the social media giant failed to ensure that children and young users are safe from online predators.

“What we are really alleging is that Meta has created a dangerous product, a product that enables not only the targeting of children, but the exploitation of children in virtual spaces and in the real world,” Torrez told CNBC’s “Squawk Box” last week when opening arguments for the trial began.

This summer, another social media trial is expected to begin in the Northern District of California. That trial also involves companies like Meta and YouTube and allegations that their respective apps contain flaws that foster detrimental mental health issues in young users.

CNBC’s Jennifer Elias contributed reporting.

WATCH: New Mexico AG Raul Torrez talks about his case against Meta

New Mexico AG Raul Torrez: Meta has created a space for predators to target and exploit children