Meta’s court losses spell potential trouble for AI research, consumer safety


Meta CEO Mark Zuckerberg leaves the Federal Courthouse in downtown Los Angeles after defending the company in a landmark social media addiction trial in Los Angeles, United States, on February 19, 2026.

Jon Putman | Anadolu | Getty Images

Over a decade ago, Meta – then known as Facebook – hired social science researchers to analyze how the social network’s services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations. 

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings from Meta’s internal research and documents seemed to contradict the way the company portrayed itself publicly. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way. 

Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies, like OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings. 

With AI now getting outsized attention for the harmful effects it’s having on some users, those companies must decide whether it’s in their best interest to continue funding that research or to suppress it. 

“There was a period of time when there were teams that were created internally who could start to look at things and, for a brief window, you had some absolutely outstanding researchers who were looking at what was happening on these products with a little bit more free rein than I understand they have today,” Boland said in an interview.

Meta’s two defeats this week centered on different cases but they had a common theme: The company didn’t share what it knew about its products’ harms with the general public.


Jury members had to evaluate millions of corporate documents, including executive emails, presentations and internal research conducted by Meta’s staff. The documents included internal surveys appearing to show a concerning percentage of teenage users receiving unwanted sexual advances on Instagram. There was also research, which Meta eventually halted, implying that people who curbed their use of Facebook became less depressed and anxious.

Plaintiffs’ attorneys in the cases didn’t rely solely on internal research to make their arguments, but those studies helped bolster their positions about Meta’s alleged culpability. Meta’s defense teams argued that certain research was old, taken out of context and misleading, presenting a flawed view of how the company operates and how it views safety.

‘Both sides of the story’

Frances Haugen, former Facebook employee, speaks during a hearing of the Committee on Energy and Commerce Subcommittee on Communications and Technology on Capitol Hill December 1, 2021, in Washington, DC.

Brendan Smialowski | AFP | Getty Images

Haugen’s “disclosures were a significant turning point globally – not just for the companies themselves but for researchers, policymakers and the broader public,” said Kate Blocker, director of research and program at the nonprofit Children and Screens: Institute of Digital Media and Child Development.

The leaks also led to major changes at Meta and in the tech industry, which began to weed out research that could be viewed as counterproductive for the companies. Many teams studying alleged harms and related issues were cut, CNBC previously reported.

Some companies also began removing certain tools and features of their services that third-party researchers utilized to study their platforms.

 “Companies may now view ongoing research as a liability, but independent, third-party research must continue to be supported,” Blocker said.

Much of the internal research used in this week’s trials didn’t contain new revelations, and many of the documents had already been released by other whistleblowers, said Sacha Haworth, executive director of the Tech Oversight Project. What the trials added, Haworth said, were “the very emails, the very words, the very screenshots, the internal marketing presentations, the memos” that offered necessary context.

As the tech industry now pushes aggressively into AI, companies like Meta, OpenAI, and Google have been prioritizing products over research and safety. It’s a trend that concerns Blocker, who said that, “much like with social media before it, there is limited public visibility into what AI companies are studying about their products.”

“AI companies seem to be mostly studying the models themselves – model behavior, model interpretability, and alignment – but there is a significant gap in research regarding the impact of chatbots and digital assistants on child development,” Blocker said. “AI companies have a chance to not repeat the mistakes of the past – we urgently need to establish systems of transparency and access that share what these companies know about their platforms with the public and support further independent evaluation.”

WATCH: Regulatory pressure to follow after landmark social media verdict.

Choose CNBC as your preferred source on Google and never miss a moment from the most trusted name in business news.


Anthropic wins preliminary injunction in DOD fight as judge cites ‘First Amendment retaliation’


CEO and co-founder of Anthropic Dario Amodei speaks onstage during the 2025 New York Times Dealbook Summit at Jazz at Lincoln Center on December 03, 2025 in New York City.

Michael M. Santiago | Getty Images

A federal judge in San Francisco granted Anthropic’s request for a preliminary injunction in its lawsuit against the Trump administration. 

Judge Rita Lin issued the ruling on Thursday, two days after lawyers for the artificial intelligence startup and the U.S. government appeared in court for a hearing. Anthropic sued the administration to try to reverse its blacklisting by the Pentagon and President Donald Trump’s directive banning federal agencies from using its Claude models.

Anthropic sought the injunction to pause those actions and prevent further monetary and reputational harm as the case unfolds. The order bars the Trump administration from implementing, applying or enforcing the president’s directive, and hampers the Pentagon’s efforts to designate Anthropic as a threat to U.S. national security. 

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Lin wrote in the order. A final verdict in the case could still be months away. 

During Tuesday’s hearing, Lin pressed the government’s lawyers about why Anthropic was blacklisted. Her language in Thursday’s order was even sharper.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” she wrote.

Following the ruling, Anthropic said it’s “grateful to the court for moving swiftly.”

“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” the company said in a statement.  

Anthropic’s suit earlier this month followed a dramatic couple of weeks in Washington, D.C., between the Department of Defense and one of the most valuable private companies in the world.

In a post on X in late February, Defense Secretary Pete Hegseth declared Anthropic a so-called supply chain risk, meaning that use of the company’s technology purportedly threatens U.S. national security. In early March, the DOD officially notified Anthropic about the designation via a letter.

Anthropic is the first American company to be publicly named a supply chain risk, as the designation has historically been reserved for foreign adversaries. The label requires defense contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their work with the military. 

The Trump administration relied on two distinct statutory authorities – 10 U.S.C. § 3252 and 41 U.S.C. § 4713 – to justify the action, and each must be challenged in a separate court. Because of that, Anthropic has filed another lawsuit seeking formal review of the Defense Department’s determination in the U.S. Court of Appeals in Washington. 

Shortly before Hegseth declared Anthropic a supply chain risk, President Donald Trump wrote a Truth Social post ordering federal agencies to “immediately cease” all use of Anthropic’s technology. He said there would be a six-month phase-out period for agencies like the DOD.

“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” Trump wrote.

The Trump administration’s actions surprised many officials in Washington who had come to admire and rely on Anthropic’s technology. The company was the first to deploy its models across the DOD’s classified networks, and it was championed for its ability to integrate with existing defense contractors like Palantir.

Anthropic signed a $200 million contract with the Pentagon in July, but as the company began negotiating Claude’s deployment on the DOD’s GenAI.mil platform in September, talks stalled.

The DOD wanted Anthropic to grant the Pentagon unfettered access to its models across all lawful purposes, while Anthropic wanted assurance that its technology would not be used for fully autonomous weapons or domestic mass surveillance. 

The two failed to reach an agreement, and now, the dispute will be settled in court. 

“Everyone, including Anthropic, agrees that the Department of [Defense] is free to stop using Claude and look for a more permissive AI vendor,” Lin said during the hearing Tuesday. “I don’t see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law.”

WATCH: Anthropic vs. Pentagon hearing



Meet Figure AI: The company behind the humanoid robot hosted by Melania Trump


First lady of Sierra Leone Fatima Jabbe-Bio, Polish first lady Marta Nawrocka, French first lady Brigitte Macron, and U.S. first lady Melania Trump look at a humanoid robot during the Fostering the Future Together Global Coalition Summit in the East Room of the White House in Washington, DC, on March 25, 2026.

Oliver Contreras | Afp | Getty Images

The White House hosted its “first humanoid robot guest” on Wednesday, with first lady Melania Trump appearing alongside a robot from robotics upstart Figure AI.

The robot, identified as Figure 3, accompanied the first lady during the second day of the Fostering the Future Together Global Coalition Summit, a gathering focused on technology and children’s education. 

The machine greeted attendees in multiple languages and described itself as “a humanoid built in the United States of America,” according to widely circulated footage from the event.

The display was one of the highest-profile showcases of humanoid robotics in the U.S. to date and highlights how the technology is becoming a national priority amid global tech competition. Beijing has also promoted humanoid robots at highly publicized events this year.

The first lady used the robot to promote her push for artificial intelligence in children’s education, suggesting that the robots could one day act as interactive educators at home. However, Figure AI says its third-generation humanoids are also applicable for more general purposes, including commercial and household tasks. 

The White House spotlight is likely to boost the brand of Nvidia-backed Figure AI, a lesser-known robot company compared to larger humanoid players like Tesla‘s Optimus and Boston Dynamics, though some of its team comes from those competitors, as well as tech giants like Apple.

A surging upstart 

Figure AI was founded in 2022 by Brett Adcock, a tech entrepreneur and billionaire who previously co-founded the publicly traded drone company Archer Aviation and the digital hiring marketplace Vettery. 

Powering its robots is the firm’s in-house Helix AI system, a vision-language-action model that enables learning through observation and verbal commands.

Amid growing investor excitement for physical AI, the firm raised more than $1 billion in a September Series C funding round led by Parkway Venture Capital, with participation from other notable investors such as Nvidia, Intel Capital, Qualcomm Ventures and Salesforce. That gave it a post-money valuation of $39 billion. 

The funds are expected to go toward the firm’s goal of deploying thousands of robots in homes and logistics over the coming years — a goal that has likely been made easier by a major endorsement from the White House. 

Figure AI has already begun work with its first commercial customer, BMW, deploying its robots for tasks like handling sheet metal parts in manufacturing facilities.


A tech figure across national priorities

Interestingly, the White House event on Wednesday wasn’t the first time that a company connected to Adcock received a major boost from the Trump administration. 

Shares of the drone company he co-founded, Archer Aviation, surged in June last year after President Donald Trump signed an executive order directing the establishment of a program to promote the safe integration of electric air taxis in U.S. cities.

Archer is participating in the initiative and is working on projects involving aircraft demonstrations. Following the June 2025 executive order, Archer raised $850 million in a registered direct stock offering. 

Adcock co-founded Archer Aviation in 2018 with Adam Goldstein and initially served as co-CEO. However, Adcock stepped down in April 2022, and then resigned from the company’s board of directors shortly afterward. 

He remains a shareholder, according to investment research platform Business Quant, but he has no active executive, board, or advisory position at the company. 



Meta must pay $375 million for violating New Mexico law in child exploitation case, jury rules


A New Mexico state court jury on Tuesday held Meta liable for nearly $400 million in civil damages after a trial where the state attorney general accused the Facebook and Instagram operator of failing to safeguard kids who use its apps from child predators.

The civil trial, which began with opening arguments in Santa Fe last month, centered on allegations that Meta violated state consumer protection laws and misled residents about the safety of apps like Facebook and Instagram. New Mexico Attorney General Raúl Torrez sued Meta in 2023 following an undercover operation involving the creation of a fake social media profile of a 13-year-old girl that he previously told CNBC “was simply inundated with images and targeted solicitations” from child abusers.

Deliberations began Monday, and jurors were tasked with ruling for or against Meta. Jury members found that Meta willfully violated the state’s unfair practices act and decided the company should pay $375 million in damages based on the number of violations.

Linda Singer, an attorney representing New Mexico, urged jury members during closing statements to impose a civil penalty against Meta that could top $2 billion.

“We respectfully disagree with the verdict and will appeal,” a Meta spokesperson said. “We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online.”

Meta denied the state of New Mexico’s allegations and previously said that it is “focused on demonstrating our longstanding commitment to supporting young people.”

“The jury’s verdict is a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety,” Torrez said in a statement. “Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew. Today the jury joined families, educators, and child safety experts in saying enough is enough.”

When the New Mexico trial’s second phase, conducted without a jury, commences on May 4, a judge will determine whether Meta created a public nuisance and should fund public programs intended to address the alleged harms. The state’s lawyers are also urging Meta to implement changes to its apps and operations, including “enacting effective age verification, removing predators from the platform, and protecting minors from encrypted communications that shield bad actors.”

During the trial, New Mexico prosecutors revealed legal filings detailing internal messages from Meta employees discussing how CEO Mark Zuckerberg’s 2019 announcement to make Facebook Messenger end-to-end encrypted by default would affect the company’s ability to make some 7.5 million reports of child sexual abuse material to law enforcement.

In an interview with CNBC on Tuesday before the verdict was revealed, Torrez discussed Meta’s argument that the prosecutors cherry-picked certain materials to paint an unfair picture of the company, and that Meta has been updating its various apps with safety features.

Torrez said he didn’t think that the jury would “be convinced that they’ve done as much as they can or should have, and that they should be held responsible for it.”

“One of the things that I am really focused on is how we can change the design features of these products, at least within New Mexico, and that would create a standard that could then be modeled elsewhere in the country, and, frankly, around the world,” Torrez said on the sidelines of the Common Sense Summit held in San Francisco.

Torrez said that a similar child-exploitation related suit involving Snap, filed by his office in 2024, is still in the discovery stages and that his team was “able to overcome section 230 motions” in both the Meta and Snap cases. The tech industry has argued that the Section 230 provision of the Communications Decency Act should prevent companies from being held liable for content shared on their respective services, leading prosecutors to test new legal strategies focusing on the design of the apps instead.

Regarding Meta’s criticism that prosecutors are picking certain corporate documents and related materials, Torrez said, “What’s interesting is they accuse us of doing that, but all we’re doing is showing the world what they knew behind closed doors and weren’t willing to tell their users.”

The New Mexico case is one of multiple social media-related trials taking place this year that experts have compared to the Big Tobacco suits from the 1990s due in part to allegations that the companies misled the public about the safety and potential harms of their products.

Jury members in a separate, personal injury trial involving Meta and Google’s YouTube have been deliberating in a Los Angeles Superior Court since last Friday. The companies are alleged to have misled the public about the safety and design of their respective apps. The jury must determine whether one or both of the companies implemented certain design features that contributed to the mental distress of a plaintiff who alleged that she became addicted to social media apps when she was underage.

A separate federal trial in the Northern District of California will commence later this year. Multiple school districts and parents across the nation allege that the actions and apps of Meta, YouTube, TikTok and Snap caused mental health harms to teenagers and children.

WATCH: Would be surprised if Meta workforce cuts are as big as reported, says Evercore’s Mark Mahaney.


Pentagon ban of Anthropic faces judge; Claude AI maker seeks injunction


Dario Amodei, co-founder and chief executive officer of Anthropic, at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026.

Prakash Singh | Bloomberg | Getty Images

U.S. District Judge Rita Lin said Tuesday that the decision by the Pentagon to blacklist Anthropic’s Claude artificial intelligence models “looks like an attempt to cripple” the company.

Anthropic appeared in San Francisco federal court on Tuesday to ask Lin to temporarily pause the Pentagon’s blacklisting and President Donald Trump’s directive banning federal government agencies from using that technology.

If the preliminary injunction is granted, the AI startup will be able to continue doing business with government contractors and federal agencies as its lawsuit against the Trump administration plays out in court.

Without the injunction, the company has said, it could lose billions of dollars in business.

Earlier in March, the Department of Defense designated Anthropic a so-called supply chain risk, meaning that use of the company’s technology purportedly threatens U.S. national security. It was the first time an American company had been hit with that designation.

The label, if allowed to continue, will require defense contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their work with the military.

“This is something that has never been done with respect to an American company,” Anthropic’s counsel Michael Mongan said during the hearing. “It is a very narrow authority. It doesn’t apply here, and it’s not a normal way to respond to the concerns that have been articulated by the other side.”

Palantir is continuing to use Claude in its work with the department as the legal battle plays out, CEO Alex Karp told CNBC on March 12. Anthropic’s model is also being used in the war with Iran.

Anthropic has argued that there is no basis to consider the company a supply chain risk. The company also said it is being unfairly retaliated against because it demanded that the DOD not use Claude for fully autonomous weapons or mass surveillance of Americans. The Pentagon insists it does not use the AI models for such purposes.

Lin said she expects to issue an order on Anthropic’s motion in the next few days.

On Monday, the judge gave lawyers for Anthropic and the government a list of questions she wants answered at the hearing.

One of those questions was: “What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in acts of sabotage or subversion?”

In its motion seeking a preliminary injunction, Anthropic argued that such an order would prevent the company from incurring further economic and reputational harm.

“The government has infringed on Anthropic’s right to speak freely; it has disparaged the company’s good name by stigmatizing it with an unlawful designation as a national security risk; it has deprived Anthropic of government contracts and damaged its relationships with business partners in the private sector; and it has put millions, possibly billions, of dollars at risk,” the motion stated. “Absent immediate relief from this Court, those harms will continue to mount.”

The company also noted that an injunction would not require the U.S. government to use its models or prevent it from transitioning to another AI vendor. 

Before the conflict erupted in late February, Anthropic was one of the first AI companies to partner with many U.S. agencies as the government sought to rapidly upgrade its systems and capabilities with cutting-edge AI tech.

Anthropic signed a $200 million contract with the Pentagon in July and was the first AI lab to deploy its technology across the agency’s classified networks.

But as the company began negotiating Claude’s deployment on the DOD’s GenAI.mil platform in September, talks stalled over how the military could use the models.

The department has insisted on unfettered access to the company’s technology for all lawful purposes. 

During the hearing on Tuesday, Lin questioned whether the DOD was punishing Anthropic for “acting stubbornly” in negotiations. The government’s lawyer, Eric Hamilton, said that the company was going beyond the normal scope of a contractor.

“Anthropic is not just acting stubbornly. It’s not just refusing to agree to contracting terms. Instead, it’s raising concerns to [DOD] about how [DOD] uses its technology in military missions,” Hamilton said. “What happens if Anthropic installs a kill switch or functionality that changes how it functions? That is an unacceptable risk to [DOD].”

In February after Anthropic and the DOD failed to reach an agreement, Trump issued a Truth Social post ordering federal agencies to “immediately cease” all use of Anthropic’s technology.

“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” Trump wrote.

WATCH: Anthropic sues Trump administration over Pentagon blacklisting



Amazon faces further AWS disruption in the Middle East from Iran conflict


The Amazon Web Services (AWS) logo, a division of Amazon.com’s U.S. e-commerce group, is displayed during the 9th edition of the VivaTech show at Parc des Expositions Porte de Versailles on June 11, 2025 in Paris, France.

Chesnot | Getty Images Entertainment | Getty Images

Amazon Web Services said it was once again facing service disruptions in Bahrain on Monday as a result of the ongoing conflict in the Middle East.

“We are working closely with local authorities and prioritizing the safety of our personnel throughout our recovery efforts,” a spokesperson said in a statement shared with CNBC. 

AWS advised customers to migrate their applications to alternate AWS Regions, and said it had already helped a large number of users to do so. 

The disruptions come after the cloud provider reported service issues related to the Iran conflict in Bahrain and the UAE earlier in March.

In the UAE, two AWS facilities were directly struck by drones. In Bahrain, a drone strike landed in close proximity to company facilities and caused physical damage.

Those earlier disruptions led to reported outages of apps and digital services in the UAE.

In recent weeks, Iran has continued to launch missile and drone strikes on its Middle East neighbors as part of its retaliation against Israel and the U.S.



Trump threatens to deploy ICE agents to airports if DHS shutdown doesn’t end, while Elon Musk offers to cover TSA agents’ pay


U.S. President Donald Trump speaks to the media as he departs the White House for Florida, in Washington, D.C., U.S., March 20, 2026.

Nathan Howard | Reuters

President Donald Trump on Saturday threatened to send federal immigration agents to U.S. airports unless congressional Democrats immediately agree to fund the Department of Homeland Security.

“I will move our brilliant and patriotic ICE Agents to the Airports where they will do Security like no one has ever seen before,” Trump wrote in a Truth Social post. The Trump administration has faced heavy criticism for aggressive deportation tactics by Immigration and Customs Enforcement and Border Patrol agents.

Trump claimed ICE agents handling airport security would arrest immigrants who are in the U.S. illegally, specifically targeting individuals from Somalia.

In a separate post later in the day, Trump said he plans to move ICE agents into airports as soon as Monday, telling them to “GET READY.”

“I look forward to moving ICE in on Monday, and have already told them to, ‘GET READY.’ NO MORE WAITING, NO MORE GAMES!” he wrote.

When asked for comment, the White House referred to Trump’s social media. DHS did not immediately respond to CNBC’s requests for comment.

A bipartisan group of senators met with DHS border czar Tom Homan last night to discuss additional immigration enforcement concessions made by the White House on Friday in an attempt to end the partial government shutdown, POLITICO reported, citing lawmakers in attendance.

The Senate is in session Saturday and Sunday, working on other legislative issues, but it is unclear whether further talks or a vote on the new DHS funding proposal will take place.

Read more CNBC politics coverage

Democrats are demanding changes to how federal immigration enforcement operates in exchange for releasing the funding. The White House and Democrats have been trading proposals for over a month but have not yet come to an agreement on a deal.

The DHS shutdown has been less disruptive than last year’s record-long government shutdown. But since much of DHS is considered essential, employees are required to work without pay.

The effects of the funding lapse and lack of pay are being felt at U.S. airports, where Transportation Security Administration agents are quitting or calling out sick. DHS employees missed their first full paychecks last week.

The shortage of agents has caused extremely long lines at security checkpoints, including in Atlanta and Houston, where spring break travel is in full swing.

“If a deal isn’t cut, you’re going to see what’s happening today look like child’s play,” Transportation Secretary Sean Duffy told CNN on Friday. Earlier in the week, Duffy warned that smaller airports could soon shut down entirely due to staffing.


In a separate post earlier in the day, Tesla CEO and former Trump advisor Elon Musk said he would like to cover the paychecks of TSA officers as the shutdown continues.

“I would like to offer to pay the salaries of TSA personnel during this funding impasse that is negatively affecting the lives of so many Americans at airports throughout the country,” Musk, the world’s richest man, said in a post on X.

Musk did not immediately respond to a request for comment.

The average salary for TSA agents is about $46,000 to $55,000, according to a recent Associated Press report.

It’s unclear how such an offer would work.

Last year, Trump announced a wealthy, unnamed donor provided $130 million to help cover military pay shortfalls caused by the administration’s first government shutdown, the longest in history. That mystery donor was revealed to be Timothy Mellon, an heir to a renowned Gilded Age banking family, The New York Times later reported.

But Mellon’s donation worked out to only about $100 per service member. It costs nearly $6.4 billion to pay U.S. troops every two weeks. And such a donation might have violated the Antideficiency Act, which bars federal agencies from spending funds that have not been appropriated by Congress, the Times reported.

Annie Nova and Dan Mangan contributed reporting

Choose CNBC as your preferred source on Google and never miss a moment from the most trusted name in business news.


Elon Musk misled Twitter investors ahead of $44 billion acquisition, jury says


Elon Musk arrives at federal court on March 4, 2026 in San Francisco, California.

Josh Edelson | Getty Images

A jury in California found that Elon Musk defrauded Twitter shareholders during the runup to his $44 billion acquisition of the social media company, according to a verdict issued on Friday.

Total damages could reach up to $2.6 billion, attorneys for the plaintiffs said.

The class action lawsuit, Pampena v. Musk, was originally filed in October 2022, after Musk completed his purchase of Twitter for $54.20 per share. He later renamed the company X, before merging it with his artificial intelligence company xAI, and then with SpaceX, his reusable rocket manufacturer.

“This is a great example of what you cannot do to the average investor — people that have 401ks, kids, pension funds, teachers, firemen, nurses,” Joseph Cotchett, an attorney for the Twitter investors, told CNBC at the San Francisco courthouse. “That’s what this case was all about. This was not about Musk. It was about the whole operation.”

In an emailed statement, Musk attorneys with Quinn Emanuel said, “We view today’s verdict, where the jury found both for and against the plaintiffs and found no fraud scheme, as a bump in the road. And we look forward to vindication on appeal.”

After Musk bid to buy Twitter in April 2022, his sentiment towards the deal quickly soured as he cast doubt on the company’s claimed level of bots, spam and fake accounts on its platform. Musk wrote in a tweet the following month that his acquisition was “temporarily on hold” until Twitter’s CEO could prove its inauthentic account levels were around the 5% reported in the company’s SEC filings.

Musk’s tweets and additional comments sent shares of Twitter sliding by almost 10% in a single session. The jury deliberated for four days and unanimously found that Musk’s tweets on May 13 and May 17 were materially false or misleading.

Former Twitter shareholders, including retail investors and options traders, argued that Musk’s remarks amounted to a scheme to pressure the company’s board to sell to him for a lower price than his original offer. They claimed he was motivated by stock price declines at Tesla, which would require him to sell even more shares in the automaker than he’d intended in order to finance the buyout.

The plaintiffs in the suit said they sold shares below $54.20 following and in response to Musk’s posts and comments during press interviews. The potential damages figure is based on expert estimates of how much Musk’s flip-flopping affected the share price during the class period.

Attorneys for the Twitter investors said it will be about 90 days before claims administration is set up, and it will then take a couple of months for claims to be processed and for investors to begin to recoup some of their losses.

Musk’s attorneys argued their client’s remarks were based on well-founded concerns about bots, spam and fake accounts on Twitter, and did not amount to securities fraud or a scheme to depress the company’s stock price.

The jury said that though Musk had made false and misleading statements that harmed some Twitter shareholders, he did not engage in a specific scheme to defraud investors.

While the verdict marks a stinging rebuke for Musk, the financial implications are minimal considering his net worth, which currently sits at about $650 billion, according to Bloomberg.

WATCH: Why Tesla is pivoting



Micron revenue almost triples, tops estimates as demand for memory soars


Micron CEO Sanjay Mehrotra speaks at a groundbreaking ceremony for the company’s semiconductor manufacturing facility in Clay, New York, on Jan. 16, 2026.

Heather Ainsworth | Bloomberg | Getty Images

Micron’s revenue almost tripled in the latest quarter as results topped analysts’ estimates and guidance sailed past expectations. The stock, which is up more than 350% in the past year, slipped in extended trading.

Here’s how the company did relative to LSEG consensus:

  • Earnings per share: $12.20 adjusted vs. $9.31 expected
  • Revenue: $23.86 billion vs. $20.07 billion expected

Micron is benefiting from soaring demand for Nvidia graphics processing units that run generative artificial intelligence models. Each generation of Nvidia chip packs in more memory, creating a supply crunch. Micron has been working to add capacity, as have competitors Samsung and SK Hynix.

Revenue in the fiscal second quarter increased from $8.05 billion a year earlier, according to a statement.

For the current period, the company expects about $33.5 billion in revenue, up from $9.3 billion a year ago, implying growth of over 200%. Adjusted earnings per share will be about $19.15, Micron said. Analysts polled by LSEG had expected $12.05 in adjusted earnings per share on $24.3 billion in revenue.

“The step-up in our results and outlook are the outcome of an increase in memory demand driven by AI, structural supply constraints and Micron’s strong execution across the board,” CEO Sanjay Mehrotra said in prepared remarks the company issued at the time of the release.

Micron’s stock has been on a tear. The shares tripled in 2025 and have jumped another 62% year to date as of Wednesday’s close. Among the 10 most valuable U.S. tech companies, Micron is the only one that’s up. Oracle is the leading decliner, down 22%, and Microsoft and Tesla have also seen double-digit percentage drops.

“Looking at how the shares were trading going into this earnings report, I thought the biggest risk was high investor expectations,” said Hendi Susanto, a portfolio manager at Gabelli Funds, in an email. “However, fiscal third-quarter guidance is strong, well above analysts’ and my own expectations.”


Mehrotra said that AI and conventional servers are facing a “lack of adequate DRAM and NAND supply.” That refers to the company’s traditional memory products that have long been used in data centers and devices.

Memory companies have been shifting production capacity largely to high-bandwidth memory, which is embedded onto Nvidia’s latest GPUs and many other chips powering AI. Those products have higher margins.

The company’s GAAP gross margin, the share of revenue left after subtracting the cost of goods sold, more than doubled in the past year to 74.4% from 36.8%, and increased from 56% in the prior quarter.
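As a back-of-the-envelope illustration of that metric, gross margin is simply revenue minus cost of goods sold, divided by revenue. The implied COGS figure below (~$6.11 billion) is inferred from the reported 74.4% margin on $23.86 billion of revenue; it is an assumption for illustration, not a number Micron disclosed.

```python
def gross_margin(revenue_b: float, cogs_b: float) -> float:
    """Gross margin = (revenue - cost of goods sold) / revenue."""
    return (revenue_b - cogs_b) / revenue_b

# Illustrative figures, in billions of dollars: revenue as reported,
# COGS inferred from the reported 74.4% margin (an assumption).
margin = gross_margin(23.86, 6.11)
print(f"{margin:.1%}")  # roughly 74.4%
```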

Net income climbed to $13.8 billion, or $12.07 per share, from $1.58 billion, or $1.41 per share, in the same quarter last year.

Micron said revenue in its cloud memory business rose more than 160% to $7.75 billion. The mobile and client unit saw even steeper growth, with revenue jumping to $7.71 billion from $2.24 billion a year ago.

Memory is typically a commodity business, which comes with lower margins than other silicon products and short-term contracts. In the past few months, memory companies have signed longer-term contracts as semiconductor makers work to ensure future capacity.

“As AI evolves, we expect compute architectures to become more memory-intensive,” the company said in an earnings presentation. “This is why we strongly believe that Micron is one of the biggest beneficiaries and enablers of AI.”

Mehrotra said on the earnings call that volume production of HBM4 for Nvidia’s Vera Rubin started in the fiscal first quarter, and next-generation HBM4e products will ramp in 2027. Nvidia has said it will utilize custom HBM in its next-generation Feynman GPU coming in 2028.

Mehrotra added that capital expenditures will “step up meaningfully” in fiscal 2027, with construction-related costs increasing by over $10 billion.

Micron is building two giant new campuses of fabrication plants in Idaho and New York to increase its memory manufacturing capacity in the U.S. Mehrotra said on the call that initial production at the Idaho site is expected by mid-2027. Micron broke ground in January on the massive $100 billion New York campus, and expects wafer output by the second half of 2028.

WATCH: How Micron is building the biggest-ever U.S. chip fab, despite China ban



Microsoft shakes up Copilot AI leadership team, freeing up Suleyman to build new models


Microsoft AI CEO Mustafa Suleyman speaks during an event highlighting Microsoft Copilot, the company’s AI tool, on April 4, 2025 in Redmond, Washington. The company also celebrated its 50th anniversary.

Stephen Brashear | Getty Images News | Getty Images

Microsoft said Tuesday that it’s bringing together the engineering groups for its commercial and consumer Copilot assistants, which have yet to gain broad adoption.

Jacob Andreou, a former Snap executive who works in Microsoft’s artificial intelligence unit, will become an executive vice president in charge of the consumer and commercial Copilot experience, CEO Satya Nadella wrote in a memo to employees.

Andreou will report to Nadella. Executives Ryan Roslansky, Perry Clarke and Charles Lamanna, who will also report to Nadella, will lead Microsoft 365 applications and the Copilot platform, Nadella wrote.

The Copilot moves will free up executive Mustafa Suleyman, a co-founder of DeepMind, the AI lab Google bought in 2014, to focus more on building new models.

“The next phase of this plan is to restructure our organization to enable me to focus all my energy on our Superintelligence efforts and be able to deliver world class models for Microsoft over the next 5 years,” Suleyman wrote in a memo. “These models will enable us to build enterprise tuned lineages that help improve all our products across the company.”

Since arriving at Microsoft through the Inflection deal in 2024, Suleyman has spent time working on Copilot for consumers, among other initiatives.

Microsoft’s Copilot app had 6 million daily active users in February, while OpenAI’s ChatGPT had 440 million and Google’s Gemini had 82 million, according to data from app analytics company Sensor Tower.

Sensor Tower said that so far in March, Anthropic’s Claude, which has gotten extensive media attention because of Anthropic’s standoff with the U.S. Department of Defense, has reached 9 million daily users, while Copilot still stands at 6 million.

Microsoft incorporates generative AI models from Anthropic and OpenAI. About 3% of commercial users with Office productivity software subscriptions have access to the Microsoft 365 Copilot add-on. Google is pushing Gemini to both consumers and corporations.

In November, Microsoft announced the formation of a superintelligence group under Suleyman, who said Tuesday that frontier model development has always been his main focus and passion.

He said he will “stay directly involved in much of the day-to-day operation” of the broad Microsoft AI group that includes products such as the Bing search engine.

Google controlled 90% of search engine market share in February, while Bing had about 5%, according to estimates from web analytics company StatCounter.

“We are doubling down on our superintelligence mission with the talent and compute to build models that have real product impact, in terms of evals, COGS reduction, as well as advancing the frontier when it comes to meeting enterprise needs and achieving the next set of research breakthroughs,” Nadella wrote.

The shake-up comes as pressure mounts on software companies to show a return on AI investments, as investors worry that the models could disrupt software incumbents.

The iShares Expanded Tech-Software Sector Exchange-Traded Fund is down about 19% so far this year, with Microsoft falling 17% in that period.

Microsoft is constructing models for generating source code, images and audio, and for reasoning, which produces answers that users may find more thoughtful but takes more time, Suleyman said.

At the same time, Microsoft will keep drawing on OpenAI intellectual property. In October, Microsoft said it has IP rights for OpenAI models and products through 2032.

“I’m genuinely thrilled about this change precisely because most of the future value is going to accrue to the model layer, and my job is to create highly COGS-optimized, highly efficient enterprise specific model lineages for Microsoft over the next three to five years,” Suleyman said in an interview, using the acronym for cost of goods sold. “That is singularly the objective, precisely because the model is the product, right? That is the future direction of all the IP.”

WATCH: Microsoft shifts from OpenAI exclusivity and expands its AI basket
