Broadcom agrees to expanded chip deals with Google, Anthropic


Broadcom CEO Hock Tan speaks at the Digital X event in Cologne, Germany, on September 13, 2022.


Broadcom said Monday that it’s agreed to produce future versions of artificial intelligence chips for Google, and signed an expanded deal with Anthropic that will give the AI startup access to about 3.5 gigawatts worth of computing capacity drawing on Google’s AI processors.

Shares of Broadcom rose 3% in extended trading.

The disclosure in a securities filing underscores the surging demand for infrastructure that can run generative AI models. Anthropic’s popularity has soared this year, with its Claude app becoming the top free U.S. app listed in Apple’s App Store in February after a dispute between the company and the Pentagon became public.

On an earnings call last month, Broadcom CEO Hock Tan said that “for Anthropic, we are off to a very good start in 2026” in providing 1 gigawatt of compute from Google’s homegrown tensor processing units (TPUs). Broadcom helps Google make its TPUs.

“For 2027, this demand is expected to surge in excess of 3 gigawatts of compute,” he said.

In a note following the earnings call, analysts at Mizuho led by Vijay Rakesh estimated that Broadcom would pick up $21 billion in AI revenue from Anthropic in 2026 and $42 billion in 2027. The filing on Monday did not contain a dollar amount.

Meanwhile, Broadcom is also collaborating with Anthropic rival OpenAI on custom silicon for AI. Both model builders currently rely heavily on graphics processing units from Nvidia through cloud providers such as Amazon, Google and Microsoft. OpenAI has also committed to drawing on six gigawatts of AMD’s GPUs, with the first gigawatt set to come in the second half of this year.

Choose CNBC as your preferred source on Google and never miss a moment from the most trusted name in business news.


Meta’s court losses spell potential trouble for AI research, consumer safety


Meta CEO Mark Zuckerberg leaves the Federal Courthouse in downtown Los Angeles after defending the company in a landmark social media addiction trial in Los Angeles, United States, on February 19, 2026.


Over a decade ago, Meta – then known as Facebook – hired social science researchers to analyze how the social network’s services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations. 

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings from Meta’s internal research and documents seemed to contradict the way the company portrayed itself publicly. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way. 

Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies, like OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings. 

With AI now getting outsized attention for the harmful effects it’s having on some users, those companies must decide whether it’s in their best interest to continue funding such research or to suppress it.

“There was a period of time when there were teams that were created internally who could start to look at things and, for a brief window, you had some absolutely outstanding researchers who were looking at what was happening on these products with a little bit more free rein than I understand they have today,” Boland said in an interview.

Meta’s two defeats this week centered on different cases, but they had a common theme: The company didn’t share what it knew about its products’ harms with the general public.


Jury members had to evaluate millions of corporate documents, including executive emails, presentations and internal research conducted by Meta’s staff. The documents included internal surveys appearing to show a concerning percentage of teenage users receiving unwanted sexual advances on Instagram. There was also research, which Meta eventually halted, implying that people who curbed their use of Facebook became less depressed and anxious.

Plaintiffs’ attorneys in the cases didn’t rely solely on internal research to make their arguments, but those studies helped bolster their positions about Meta’s alleged culpability. Meta’s defense teams argued that certain research was old, taken out of context and misleading, presenting a flawed view of how the company operates and how it views safety.

‘Both sides of the story’

Frances Haugen, former Facebook employee, speaks during a hearing of the Committee on Energy and Commerce Subcommittee on Communications and Technology on Capitol Hill December 1, 2021, in Washington, DC.


Haugen’s “disclosures were a significant turning point globally – not just for the companies themselves but for researchers, policymakers and the broader public,” said Kate Blocker, director of research and program at the nonprofit Children and Screens: Institute of Digital Media and Child Development.

The leaks also led to major changes at Meta and in the tech industry, which began to weed out research that could be viewed as counterproductive for the companies. Many teams studying alleged harms and related issues were cut, CNBC previously reported.

Some companies also began removing certain tools and features of their services that third-party researchers utilized to study their platforms.

“Companies may now view ongoing research as a liability, but independent, third-party research must continue to be supported,” Blocker said.

Much of the internal research used in this week’s trials didn’t contain new revelations, and many of the documents had already been released by other whistleblowers, said Sacha Haworth, executive director of the Tech Oversight Project. What the trials added, Haworth said, were “the very emails, the very words, the very screenshots, the internal marketing presentations, the memos” that offered necessary context.

As the tech industry now pushes aggressively into AI, companies like Meta, OpenAI, and Google have been prioritizing products over research and safety. It’s a trend that concerns Blocker, who said that, “much like with social media before it, there is limited public visibility into what AI companies are studying about their products.”

“AI companies seem to be mostly studying the models themselves – model behavior, model interpretability, and alignment – but there is a significant gap in research regarding the impact of chatbots and digital assistants on child development,” Blocker said. “AI companies have a chance to not repeat the mistakes of the past – we urgently need to establish systems of transparency and access that share what these companies know about their platforms with the public and support further independent evaluation.”



Anthropic wins preliminary injunction in DOD fight as judge cites ‘First Amendment retaliation’


Dario Amodei, CEO and co-founder of Anthropic, speaks onstage during the 2025 New York Times DealBook Summit at Jazz at Lincoln Center on December 3, 2025, in New York City.


A federal judge in San Francisco granted Anthropic’s request for a preliminary injunction in its lawsuit against the Trump administration. 

Judge Rita Lin issued the ruling on Thursday, two days after lawyers for the artificial intelligence startup and the U.S. government appeared in court for a hearing. Anthropic sued the administration to try to reverse its blacklisting by the Pentagon and President Donald Trump’s directive banning federal agencies from using its Claude models.

Anthropic sought the injunction to pause those actions and prevent further monetary and reputational harm as the case unfolds. The order bars the Trump administration from implementing, applying or enforcing the president’s directive, and blocks the Pentagon’s effort to designate Anthropic as a threat to U.S. national security.

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Lin wrote in the order. A final verdict in the case could still be months away. 

During Tuesday’s hearing, Lin pressed the government’s lawyers about why Anthropic was blacklisted. Her language in Thursday’s order was even sharper.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” she wrote.

Following the ruling, Anthropic said it’s “grateful to the court for moving swiftly.”

“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” the company said in a statement.  

Anthropic’s suit earlier this month followed a dramatic couple of weeks in Washington, D.C., between the Department of Defense and one of the most valuable private companies in the world.

In a post on X in late February, Defense Secretary Pete Hegseth declared Anthropic a so-called supply chain risk, meaning that use of the company’s technology purportedly threatens U.S. national security. In early March, the DOD officially notified Anthropic about the designation via a letter.

Anthropic is the first American company to publicly be named a supply chain risk, as the designation has historically been reserved for foreign adversaries. The label requires Defense contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their work with the military. 

The Trump administration relied on two distinct statutes – 10 U.S.C. § 3252 and 41 U.S.C. § 4713 – to justify the action, and each must be challenged in a separate court. Because of that, Anthropic has filed another lawsuit seeking a formal review of the Defense Department’s determination in the U.S. Court of Appeals in Washington.

Shortly before Hegseth declared Anthropic a supply chain risk, President Donald Trump wrote a Truth Social post ordering federal agencies to “immediately cease” all use of Anthropic’s technology. He said there would be a six-month phase-out period for agencies like the DOD.

“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” Trump wrote.

The Trump administration’s actions surprised many officials in Washington who had come to admire and rely on Anthropic’s technology. The company was the first to deploy its models across the DOD’s classified networks, and it was championed for its ability to integrate with existing defense contractors like Palantir.

Anthropic signed a $200 million contract with the Pentagon in July, but as the company began negotiating Claude’s deployment on the DOD’s GenAI.mil AI platform in September, talks stalled.

The DOD wanted Anthropic to grant the Pentagon unfettered access to its models across all lawful purposes, while Anthropic wanted assurance that its technology would not be used for fully autonomous weapons or domestic mass surveillance. 

The two failed to reach an agreement, and now, the dispute will be settled in court. 

“Everyone, including Anthropic, agrees that the Department of [Defense] is free to stop using Claude and look for a more permissive AI vendor,” Lin said during the hearing Tuesday. “I don’t see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law.”



Meta must pay $375 million for violating New Mexico law in child exploitation case, jury rules


A New Mexico state court jury on Tuesday held Meta liable for nearly $400 million in civil damages after a trial where the state attorney general accused the Facebook and Instagram operator of failing to safeguard kids who use its apps from child predators.

The civil trial, which began with opening arguments in Santa Fe last month, centered on allegations that Meta violated state consumer protection laws and misled residents about the safety of apps like Facebook and Instagram. New Mexico Attorney General Raúl Torrez sued Meta in 2023 following an undercover operation involving the creation of a fake social media profile of a 13-year-old girl that he previously told CNBC “was simply inundated with images and targeted solicitations” from child abusers.

Deliberations began Monday, and jurors were tasked with ruling for or against the defendant, Meta. Jury members found that Meta willfully violated the state’s unfair practices act and decided the company should pay $375 million in damages based on the number of violations.

Linda Singer, an attorney representing New Mexico, urged jury members during closing statements to impose a civil penalty against Meta that could top $2 billion.

“We respectfully disagree with the verdict and will appeal,” a Meta spokesperson said. “We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online.”

Meta denied the state of New Mexico’s allegations and previously said that it is “focused on demonstrating our longstanding commitment to supporting young people.”

“The jury’s verdict is a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety,” Torrez said in a statement. “Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew. Today the jury joined families, educators, and child safety experts in saying enough is enough.”

When the New Mexico trial’s second phase, conducted without a jury, commences on May 4, a judge will determine whether Meta created a public nuisance and should fund public programs intended to address the alleged harms. The state’s lawyers are also urging Meta to implement changes to its apps and operations, including “enacting effective age verification, removing predators from the platform, and protecting minors from encrypted communications that shield bad actors.”

During the trial, New Mexico prosecutors revealed legal filings detailing internal messages from Meta employees discussing how CEO Mark Zuckerberg’s 2019 announcement that Facebook Messenger would become end-to-end encrypted by default would affect the company’s ability to disclose some 7.5 million child sexual abuse material reports to law enforcement.

In an interview with CNBC on Tuesday before the verdict was revealed, Torrez discussed Meta’s argument that the prosecutors cherry-picked certain materials to paint an unfair picture of the company, and that Meta has been updating its various apps with safety features.

Torrez said he didn’t think that the jury would “be convinced that they’ve done as much as they can or should have, and that they should be held responsible for it.”

“One of the things that I am really focused on is how we can change the design features of these products, at least within New Mexico, and that would create a standard that could then be modeled elsewhere in the country, and, frankly, around the world,” Torrez said on the sidelines of the Common Sense Summit held in San Francisco.

Torrez said that a similar child-exploitation-related suit involving Snap, filed by his office in 2024, is still in the discovery stage and that his team was “able to overcome section 230 motions” in both the Meta and Snap cases. The tech industry has argued that Section 230 of the Communications Decency Act should prevent companies from being held liable for content shared on their respective services, leading prosecutors to test new legal strategies focused on the design of the apps instead.

Regarding Meta’s criticism that prosecutors are picking certain corporate documents and related materials, Torrez said, “What’s interesting is they accuse us of doing that, but all we’re doing is showing the world what they knew behind closed doors and weren’t willing to tell their users.”

The New Mexico case is one of multiple social media-related trials taking place this year that experts have compared to the Big Tobacco suits from the 1990s due in part to allegations that the companies misled the public about the safety and potential harms of their products.

Jury members in a separate, personal injury trial involving Meta and Google’s YouTube have been deliberating in Los Angeles Superior Court since last Friday. The companies are alleged to have misled the public about the safety and design of their respective apps. The jury must determine whether one or both of the companies implemented certain design features that contributed to the mental distress of a plaintiff who alleged that she became addicted to social media apps when she was underage.

A separate federal trial in the Northern District of California will commence later this year. Multiple school districts and parents across the nation allege that the actions and apps of Meta, YouTube, TikTok and Snap caused mental health-related harms to teenagers and children.



Pentagon ban of Anthropic faces judge; Claude AI maker seeks injunction


Dario Amodei, co-founder and chief executive officer of Anthropic, at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026.


U.S. District Judge Rita Lin said Tuesday that the decision by the Pentagon to blacklist Anthropic’s Claude artificial intelligence models “looks like an attempt to cripple” the company.

Anthropic appeared in San Francisco federal court on Tuesday to ask Lin to temporarily pause the Pentagon’s blacklisting and President Donald Trump’s directive banning federal government agencies from using that technology.

If the preliminary injunction is awarded, the AI startup will be able to continue doing business with government contractors and federal agencies as its lawsuit against the Trump administration plays out in court.

Without the injunction, the company has said, it could lose billions of dollars in business.

Earlier in March, the Department of Defense designated Anthropic a so-called supply chain risk, meaning that use of the company’s technology purportedly threatens U.S. national security. It was the first time an American company had been hit with that designation.

The label, if allowed to continue, will require defense contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their work with the military.

“This is something that has never been done with respect to an American company,” Anthropic’s counsel Michael Mongan said during the hearing. “It is a very narrow authority. It doesn’t apply here, and it’s not a normal way to respond to the concerns that have been articulated by the other side.”

Palantir is continuing to use Claude in its work with the department as the legal battle plays out, CEO Alex Karp told CNBC on March 12. Anthropic’s model is also being used in the war with Iran.

Anthropic has argued that there is no basis to consider the company a supply chain risk. The company also said it is being unfairly retaliated against because it demanded that the DOD not use Claude for fully autonomous weapons or mass surveillance of Americans. The Pentagon insists it does not use the AI models for such purposes.

Lin said she expects to issue an order on Anthropic’s motion in the next few days.

On Monday, the judge gave lawyers for Anthropic and the government a list of questions she wants answered at the hearing.

One of those questions was: “What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in acts of sabotage or subversion?”

In its motion seeking a preliminary injunction, Anthropic argued that such an order would prevent the company from incurring further economic and reputational harm.

“The government has infringed on Anthropic’s right to speak freely; it has disparaged the company’s good name by stigmatizing it with an unlawful designation as a national security risk; it has deprived Anthropic of government contracts and damaged its relationships with business partners in the private sector; and it has put millions, possibly billions, of dollars at risk,” the motion stated. “Absent immediate relief from this Court, those harms will continue to mount.”

The company also noted that an injunction would not require the U.S. government to use its models or prevent it from transitioning to another AI vendor. 

Before the conflict erupted in late February, Anthropic was one of the first AI companies to partner with many U.S. agencies as the government sought to rapidly upgrade its systems and capabilities with cutting-edge AI tech.

Anthropic signed a $200 million contract with the Pentagon in July and was the first AI lab to deploy its technology across the agency’s classified networks.

But as the company began negotiating Claude’s deployment on the DOD’s GenAI.mil AI platform in September, talks stalled over how the military could use the models.

The department has insisted on unfettered access to the company’s technology for all lawful purposes. 

During the hearing on Tuesday, Lin questioned if the DOD was punishing Anthropic for “acting stubbornly” in negotiations. The government’s lawyer Eric Hamilton said that the company was going beyond the normal scope of a contractor.

“Anthropic is not just acting stubbornly. It’s not just refusing to agree to contracting terms. Instead, it’s raising concerns to [DOD] about how [DOD] uses its technology in military missions,” Hamilton said. “What happens if Anthropic installs a kill switch or functionality that changes how it functions? That is an unacceptable risk to [DOD].”

In February, after Anthropic and the DOD failed to reach an agreement, Trump issued a Truth Social post ordering federal agencies to “immediately cease” all use of Anthropic’s technology.

“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” Trump wrote.



Chuck Norris fact: the late star’s memes were undefeated



Chuck Norris was more than a legendary martial artist and actor — he was also an internet icon.

The “Walker, Texas Ranger” star, who died suddenly on Thursday at age 86 following a medical emergency, was the subject of hilarious internet memes that highlighted his tough guy, no-nonsense persona.

The memes, which began circulating on the internet in 2005, feature wildly hyperbolic statements about Norris.

“Chuck Norris can pass a vision test with his eyes closed,” is just one of the many viral memes.

Chuck Norris (seen above at the National Rifle Association’s 139th annual meeting in 2010) died on March 19, 2026, at age 86.
The actor (pictured above in 2016) passed away after a medical emergency.

The jokes turned into such an internet phenomenon that they became known as “Chuck Norris facts” — which even has its own Wikipedia page.

Norris responded to the memes multiple times over the years, choosing to embrace the internet’s fascination with him, rather than be offended.

“I get asked a lot if I like the ‘Chuck Norris Facts’ that circulate. The answer is yes,” Norris wrote in a 2023 Facebook post.

“My wife Gena and I really enjoy them,” he quipped.

Norris, seen above at “The Expendables 2” premiere in 2012, was also known for being an internet icon.
The late star (seen above at Michael Bolton’s Hollywood Walk of Fame star ceremony in 2002) was the subject of internet memes that became known as Chuck Norris Facts.

There was also a nod to the facts in the 2012 movie “The Expendables 2,” which features a scene where Sylvester Stallone and Norris’ characters make references to the joke, “Chuck Norris was once bitten by a king cobra. After five days of agonizing pain, the cobra died.”

Keep reading for the best Chuck Norris memes in the wake of his death.

“Chuck Norris doesn’t sleep, he waits”

There are thousands of memes about Chuck Norris out there on the internet.

One of the most iconic Chuck Norris memes exaggerates Norris’ strength and toughness. It also helps that his muscles are popping out.

“Chuck Norris can win a game of Connect Four in only three moves”

The viral memes started in 2005 and made Norris an internet icon.

The Chuck Norris Connect Four meme emphasizes his ability to do the unthinkable. The man could make the impossible possible.

“Only two things will survive doomsday — cockroaches and Chuck Norris. The roaches, however, will not survive Chuck Norris”

The memes would exaggerate Norris’ tough guy persona.

Norris shirtless, standing in front of a volcano, is another one of his most infamous memes. He can survive anything — even a doomsday scenario.

“When I slice onions, onions cry”

The memes always featured wildly hyperbolic statements about Norris.

Real men cry — but not Chuck Norris. Not even when he’s slicing onions.

“When the Hulk gets really angry, he turns into Chuck Norris”

This meme poked fun at the Hulk turning into Norris.

Is Chuck Norris the real Bruce Banner? The Hulk meme pokes fun at Norris transforming into the green Marvel superhero.

“Chuck Norris kicked the world once. It hasn’t stopped spinning”

Norris would be shirtless sometimes in the memes.

The meme of a ripped Chuck Norris holding weights speaks for itself. The world will never stop spinning because of him.

“Chuck Norris’ blood type is AK-47”

He reacted to the viral memes in a Facebook post in 2023.

The Chuck Norris blood type meme is one of the best. That man loved his guns in his films.

“Guns carry him for protection”

“I get asked a lot if I like the ‘Chuck Norris Facts’ that circulate. The answer is yes,” he said at the time.

The Chuck Norris guns memes were always hits on the internet. Never mess with Chuck.

“When Chuck Norris watches Dora, she doesn’t ask any questions”

“My wife Gena and I really enjoy them,” Norris added.

The Chuck Norris Dora meme exaggerates his authority. It implies Dora — and her map — are too scared to ask him to participate in her iconic quests.

“There is no theory of evolution. Just a list of creatures Chuck Norris has allowed to live”

In the 2023 post, Norris also listed his and his wife’s ten favorite Chuck Norris facts that made the couple laugh.

The theory of evolution meme plays up Norris’ prowess. It jokes that every creature alive today exists only because Norris allowed it to live.

“Yo Chuck, Ima let you finish, but… you know what, you go right ahead and finish”

Kanye West interrupting Taylor Swift at the 2009 VMAs became its own Chuck Norris meme.

One meme pokes fun at when Kanye West infamously interrupted Taylor Swift’s acceptance speech at the 2009 VMAs. In this instance, West doesn’t dare do the same to Norris.

“Death once had a near-Chuck Norris experience”

Some of the memes played up Norris often having guns in his movies.

Instead of the normal saying that someone had a near-death experience, this meme flipped the phrase around to joke about Norris being invincible.

“If you have 5 dollars and Chuck Norris has 5 dollars, Chuck Norris now has 10 dollars”

Norris gave a nod to the memes in the 2012 film “The Expendables 2.”

The Chuck Norris money fact jokes that the actor always got what he wanted from others.

“Chuck Norris can never fill out an online form. Because he will never submit”

The movie featured a scene that references the joke, “Chuck Norris was once bitten by a king cobra. After five days of agonizing pain, the cobra died.”

Norris would never “submit” to others in life. The infamous snapshot of him in his blue sleeveless shirt got more exposure thanks to this meme.

“When Chuck Norris left for college, he told his father, ‘You’re the man of the house now.’”

The Chuck Norris memes have been going strong for over 20 years.

This meme poked fun at Norris always calling the shots, even over his own father at his house.

“Played rock paper scissors in front of a mirror and won”

This meme jokes that Norris won rock paper scissors in front of a mirror.

Norris could do anything, including win a game of rock paper scissors with the mirror.

“Chuck Norris is the reason Waldo is hiding”

Norris sent Waldo into hiding, according to this meme.

This meme, again using Norris in the sleeveless shirt, poked fun at “Where’s Waldo?” Apparently, Norris sent the red-and-white striped shirt character into hiding.

“I use a stunt double for crying scenes”

Norris used a stunt double for his crying scenes, this meme joked.

As this meme indicates, Norris didn’t like shooting crying scenes himself, so his stunt doubles came to his rescue.

“Chuck Norris once fought Batman for a bet. The loser got nightshift”

This meme was about a fight between Norris and Batman.

The Chuck Norris vs. Batman meme is one of the best. Of course, it was a losing battle for Batman from the beginning.

“Chuck Norris has a grizzly bear carpet in his room. It’s not dead, it’s just afraid to move”

The Chuck Norris memes are bound to live on after his death.

This meme jokes that the grizzly bear carpet in Norris’ room isn’t dead; it’s just too afraid of him to move.


Google sells partial stake in fiber business, becomes minority owner of new venture


A technician gets cabling out of his truck to install Google Fiber.


Google said its fiber internet unit, GFiber, is combining with Astound Broadband to form an independent provider, with Google remaining a minority shareholder.

The new company will be majority owned by investment firm Stonepeak and led by the existing GFiber executive team, “utilizing their expertise in high-speed fiber innovation to manage the combined network footprint,” Google said in a press release on Wednesday. The transaction is expected to close in the fourth quarter.

Google Fiber, launched in 2010, was an early effort by Google to build ultra-fast fiber-optic broadband networks in the U.S., starting with a gigabit-speed rollout in Kansas City in 2012. Google proposed building gigabit fiber connections to homes, far faster than typical U.S. internet speeds at the time.

Since then, Google has canceled some planned expansions and focused on select markets rather than pursuing a costly, time-intensive nationwide rollout.

The spinout comes at a time when demand is growing for high-capacity networks fueled by the increasing popularity of artificial intelligence services. The external capital will help the new entity expand across the country, the company said.

“This partnership with Astound and Stonepeak is the next step in our decade-long mission to redefine what customers can expect from their internet provider,” GFiber CEO Dinni Jain said in the release.

GFiber has been part of Google’s “Other Bets” segment, which includes non-core assets such as the Waymo robotaxi division and drug discovery business Isomorphic Labs. In 2025, the combined segment generated $1.54 billion in revenue, or less than 0.5% of Alphabet’s total sales, and recorded an operating loss of $16.8 billion.

The shift toward fiber infrastructure has become increasingly important as demand grows for networks that can support cloud computing, streaming and emerging AI services. U.S. tech giants are also rolling out a rapidly expanding network of transcontinental subsea cables, seeking to keep pace with growing bandwidth demand.

Astound is a major U.S. cable operator and broadband platform, which was acquired by Stonepeak in 2021 for $8.1 billion. Stonepeak specializes in infrastructure and real estate.

A Google spokesperson didn’t immediately respond to a request for comment.

WATCH: Google’s capacity advantage



FCC chair slams Amazon for slow satellite launches after it opposed SpaceX data center plan


FCC Chairman Brendan Carr testifies during the House Energy and Commerce Subcommittee on Communications and Technology hearing titled “Oversight of the Federal Communications Commission,” in Rayburn building on Wednesday, January 14, 2026.

Tom Williams | Cq-roll Call, Inc. | Getty Images

Federal Communications Commission Chairman Brendan Carr lashed out at Amazon on Wednesday for opposing SpaceX’s orbital data center plans while it’s falling short of its own satellite “deployment milestone.”

“Amazon should focus on the fact that it will fall roughly 1,000 satellites short of meeting its upcoming deployment milestone, rather than spending their time and resources filing petitions against companies that are putting thousands of satellites in orbit,” Carr wrote in a post on X.

Amazon declined to comment.

Amazon last week urged the FCC to reject a SpaceX application for permission to launch a constellation of up to 1 million low Earth orbit satellites, which would function as a data center network in space to support artificial intelligence projects.

Amazon characterized the application as a “lofty ambition rather than a real plan,” noting SpaceX has provided scant details around how it will “deliver on these grand claims.”

SpaceX’s Starlink service currently dominates the internet-from-space market. Amazon has been vying to compete with Starlink via its Leo satellite service, previously branded as Kuiper. The company has invested more than $10 billion into the effort, and has sent up at least 200 satellites since last April via a variety of launch partners, including Elon Musk’s SpaceX.

In late January, Amazon asked the FCC for a waiver or 24-month extension, to July 2028, to meet a deadline that requires it to deploy roughly 1,600 internet satellites by July 2026. At the time, the company blamed delays beyond its control, including a “shortage in the near-term availability” of rockets and manufacturing disruptions.

Amazon noted in its request that the FCC has previously granted similar extensions. The FCC last month approved a separate petition from Amazon to deploy 4,500 internet satellites, which would more than double the size of its constellation.

Starlink has around 9,000 satellites in orbit today and roughly 9 million customers. It recently received authorization from the FCC to put another 7,500 satellites into orbit.

Scientists have decried the SpaceX proposal to launch one million satellites into orbit, citing a wide range of issues, including light pollution, orbital debris and other harms to the broader orbital environment, as well as increased risk of “Kessler syndrome,” a scenario in which debris and clutter in space can cause a chain reaction that makes low Earth orbit unusable.

Amazon pointed to these concerns from astronomers and environmental groups in its petition, and said SpaceX’s application “risks worsening international backlash” from regulators who are concerned about monopolization of space resources.

“Granting the application would worsen matters further, forcing every other operator in Low-Earth Orbit to plan around a constellation that may never exist, distorting international spectrum and orbital coordination proceedings, and lending regulatory legitimacy to what amounts to a publicity and narrative-shaping exercise,” Amazon wrote in its request to the FCC.

The FCC hasn’t yet approved SpaceX’s request, but in separate remarks to Reuters on Wednesday, Carr said he doesn’t expect Amazon’s petition to “get much traction.”

Carr is a longtime public fan of SpaceX who has mocked environmental concerns from those calling out Musk’s company for launches that harmed public lands and endangered species’ habitat.

He also accused the FCC, under former President Joe Biden, of “regulatory harassment” of SpaceX when the agency found the company’s Starlink WiFi service was not fit at the time to fulfill the program needs of a rural broadband initiative.



5 unresolved questions hanging over the Anthropic–Pentagon fracas: ‘It’s all very puzzling’


Anthropic co-founder and CEO Dario Amodei speaks on an artificial intelligence panel during Inbound 2025 Powered by HubSpot at Moscone Center on in San Francisco, Sept. 4, 2025.

Chance Yeh | Getty Images Entertainment | Getty Images

Defense Secretary Pete Hegseth’s decision to label Anthropic a “Supply-Chain Risk to National Security” on Friday resulted in more questions than answers.

“It’s all very puzzling,” Herbert Lin, a senior research scholar at Stanford University’s Center for International Security and Cooperation, told CNBC in an interview.

Anthropic is the only American company ever to be publicly named a supply chain risk, as the designation has traditionally been used against foreign adversaries. But the company hasn’t received any official declaration beyond social media posts.

A formal designation will require defense vendors and contractors to certify that they don’t use Anthropic’s models in their work with the Pentagon.

The dispute centered around how Anthropic’s artificial intelligence models could be used by the military. The Department of Defense wanted Anthropic to grant the agency unfettered access to its Claude models across all lawful purposes, while Anthropic wanted assurance that its technology would not be tapped for fully autonomous weapons or domestic mass surveillance.

With no agreement reached by Friday’s deadline, President Donald Trump directed federal agencies to “immediately cease” all use of Anthropic’s technology, and said there would be a six-month phaseout period for agencies like the DOD.

Experts told CNBC the supply chain risk designation is highly unusual, especially since the U.S. and Israel began carrying out strikes in Iran just hours later. A group of retired defense officials, policy leaders and executives wrote to Congress on Thursday, defending Anthropic and calling the Trump administration’s designation a “dangerous precedent.”

Anthropic’s models are still being used to support U.S. military operations in Iran, even after the company was blacklisted, as CNBC previously reported.

Talks between Anthropic and the DOD are now reportedly back on, according to the Financial Times, but there are still big questions hanging over the issue as of Thursday.

Why is the U.S. government still using Claude?

Stanford’s Lin doesn’t understand why the DOD is still using Anthropic’s models in sensitive settings if they pose such a threat. If the Trump administration really sees Anthropic as a risk to national security, he said, it wouldn’t make sense to phase out the models over an extended period of time.

“OK, wait a minute, they’re a really dangerous player for U.S. national security, so you’re going to use them for another six months? Huh?” Lin said. 

Michael Horowitz, a senior fellow for technology and innovation at the Council on Foreign Relations, said it’s “especially notable” that Anthropic’s models were used to support the U.S. military action in Iran. He said “there’s no clearer signal” of how much the Pentagon values the technology.

“Even in a situation where there is this intense feud between the company and the Pentagon, they are using their technology in the most important military operation that the United States is conducting,” he said. 

Transitioning away from Anthropic toward a new vendor takes time and comes at a significant cost in terms of efficiency, said Jacquelyn Schneider, a Hargrove Hoover fellow at Stanford University’s Hoover Institution.

Until recently, Anthropic was the only AI company approved to deploy its models across the agency’s classified networks. OpenAI and Elon Musk’s xAI have since received clearance, but their systems can’t be deployed or adopted overnight.

What’s the actual threat?

The Anthropic logo appears on a smartphone screen with multiple Claude AI logos in the background. Following the release of Claude Opus 4.6 on February 5, Anthropic continues to challenge its main competitors in the generative AI market in Creteil, France, on February 6, 2026.

Samuel Boivin | Nurphoto | Getty Images

By designating Anthropic a supply chain risk, the DOD is suggesting that the company is “really bad” for U.S. national security, Lin said. But he stressed that the agency hasn’t clearly outlined what kind of threat the company poses. 

“They don’t point to any technical failing, they don’t point to any hack,” Lin said. “They say things like ‘They’re arrogant,’ and ‘We don’t want you telling the DoD what to do in some hypothetical situation that hasn’t happened yet.'”

Lin said the other punishment that Hegseth was threatening to impose on Anthropic, invoking the Defense Production Act, also contradicts the idea that the company threatens national security. 

The Defense Production Act allows the president to control domestic industries under emergency authority when it’s in the interest of national security. It could essentially compel Anthropic to let the Pentagon use its technology. 

Horowitz said he thinks the clash between Anthropic and the DOD is “masquerading” as a policy dispute. 

Months earlier, after one of the company’s executives published an essay, venture capitalist and White House AI and crypto czar David Sacks criticized Anthropic for “running a sophisticated regulatory capture strategy based on fear-mongering,” and conservatives have repeatedly accused the company of pushing “woke AI.”

Anthropic CEO Dario Amodei took a different approach than other tech executives, avoiding getting cozy with the Trump administration in its early days.

“This feels to me like a dispute that is about politics and personalities,” Horowitz said. 

Is an official designation on the way?

U.S. Defense Secretary Pete Hegseth walks on the day of classified briefings for the U.S. Senate and House of Representatives on the situation in Iran, on Capitol Hill in Washington, D.C., U.S., March 3, 2026.

Kylie Cooper | Reuters

Anthropic hasn’t been designated a supply chain risk by any official measure, and there’s an open question as to if or when the company should expect one. Defense contractors have to decide whether to follow the directive Hegseth issued over social media or wait for more formal guidance. 

Several executives told CNBC that their companies are moving away from Anthropic’s models, and a venture capitalist said a number of portfolio companies are switching “out of an abundance of caution.” But others disagreed: C3 AI Chairman Tom Siebel said he doesn’t see a “need to mitigate” the technology “until it gets litigated.” 

Schneider said businesses are rational, and if they think it’s high risk to work with Anthropic, whether it’s formally declared a supply chain risk or not, they’re going to hedge and look for other partners.

“There’s all sorts of decisions that have been made within the Trump administration that, by law, require more codification,” Schneider said. “Even the example of moving from DoD to [Department of War]. That by law needs more codification, but all the contractors are using DoW.”

Even so, Samir Jain, vice president of policy at the Center for Democracy and Technology, said social media posts likely aren’t enough to actually cause a designation.

“There’s a process that the statute requires, including an actual finding that Anthropic presents national security risks if it’s part of the supply chain,” he said in an interview. “I don’t think, factually, that that predicate could possibly be met here.”

Anthropic said in a statement Friday that it will challenge “any supply chain risk designation in court.”

Does this have anything to do with the U.S. strikes on Iran?

Smoke rises from Israeli bombardment on the southern Lebanese village of Khiam on March 4, 2026.

Rabih Daher | Afp | Getty Images

For Schneider, the war in Iran now looms large over the spat between Anthropic and the DOD. She said she’s left wondering whether the two conflicts were happening in parallel, or if they were somehow related. 

“Obviously, you’re not going to walk away from technologies that are deeply embedded in your wartime processes right before you go to war,” Schneider said.

She said planning a military operation of that magnitude would have required “a lot of sleepless nights,” so she was surprised the DOD was willing to spend such a “remarkable amount of energy” on a public clash ahead of the initial attack.

What happens next?

As the war in Iran stretches into its sixth day, Anthropic’s path forward with the DOD remains a big mystery.  

Horowitz said he would bet that the six-month off-boarding period will become a “locus for some re-examination” within the Pentagon, especially since members of Congress and broader public markets have shown so much interest in the dispute. 

Lin expressed a similar sentiment, and said he wouldn’t bet on Anthropic’s models being out of the DOD a year from now.

Schneider is less convinced. 

“I wish I had a more definitive thought about where this is all going to go, but everything is so unprecedented,” she said. When it comes to historical examples or analogous cases, Schneider said: “I don’t have those. It’s just super limited.”

The DOD declined to comment. Anthropic didn’t provide a comment.

WATCH: Anthropic tops $19 billion in annual revenue rate



Amazon’s Bahrain data center targeted by Iran for support of U.S. military, state media says


People walk past the logo of Amazon Web Services (AWS) at its exhibitor stall at the India Mobile Congress 2025 at Yashobhoomi, a convention and expo center in New Delhi, India, October 8, 2025.

Anushree Fadnavis | Reuters

Amazon‘s data center in Bahrain was targeted by Iran’s Islamic Revolutionary Guard Corps for the company’s support of the U.S. military, Iranian state media said Wednesday.

The company’s cloud computing unit said Monday that one of its facilities in Bahrain was damaged due to a nearby drone strike on Sunday. Two data centers in the United Arab Emirates were also damaged after they were “directly struck” by drones.

All of the facilities remain offline, according to the Amazon Web Services health dashboard.

The attack in Bahrain was launched “to identify the role of these centers in supporting the enemy’s military and intelligence activities,” Iran’s Fars News Agency said on Telegram.

The incidents came after joint U.S.-Israel strikes on Iran over the weekend. Iran has retaliated against Israeli and U.S. bases across the Gulf.

Amazon declined to comment.

In addition to structural damage, the data centers also experienced power disruptions and some water damage after firefighters worked to put out sparks and fire. Some popular AWS applications experienced “elevated error rates and degraded availability” due to the incident.

AWS advised cloud customers to back up their data, consider migrating their workloads to other regions and direct traffic away from Bahrain and the UAE.
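The advice above amounts to a failover decision: when a home region is unhealthy, route workloads to a pre-ranked healthy alternative. The sketch below illustrates that logic in plain Python; it is not AWS tooling, and the fallback ordering and status map are hypothetical (only the region codes `me-south-1` for Bahrain and `me-central-1` for the UAE are real AWS identifiers).

```python
# Hypothetical sketch of region-failover selection, not an AWS API.
# The fallback rankings below are illustrative assumptions.

PREFERRED_FALLBACKS = {
    "me-south-1": ["eu-south-1", "eu-central-1"],    # Bahrain
    "me-central-1": ["eu-south-1", "eu-central-1"],  # UAE
}

def pick_region(home: str, status: dict) -> str:
    """Return `home` if healthy, else the first healthy ranked fallback."""
    if status.get(home, False):
        return home
    for candidate in PREFERRED_FALLBACKS.get(home, []):
        if status.get(candidate, False):
            return candidate
    raise RuntimeError("no healthy region available for " + home)

# Example: Bahrain offline, European regions up.
status = {"me-south-1": False, "eu-south-1": True, "eu-central-1": True}
print(pick_region("me-south-1", status))  # -> eu-south-1
```

In practice customers would pair a check like this with DNS-level traffic shifting and cross-region data replication, which is what backing up data and redirecting traffic away from the affected regions entails.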

AWS announced its Bahrain region in 2019, and it hosts significant workloads for governments there. The company also operates a corporate office in Bahrain that is primarily for AWS employees.

Earlier this week, Amazon instructed all of its corporate employees in the Middle East to work remotely and “follow local government guidelines” amid escalating instability in the region.
