5 unresolved questions hanging over the Anthropic–Pentagon fracas: ‘It’s all very puzzling’


Anthropic co-founder and CEO Dario Amodei speaks on an artificial intelligence panel during Inbound 2025 Powered by HubSpot at Moscone Center in San Francisco, Sept. 4, 2025.

Chance Yeh | Getty Images Entertainment | Getty Images

Defense Secretary Pete Hegseth’s decision to label Anthropic a “Supply-Chain Risk to National Security” on Friday resulted in more questions than answers.

“It’s all very puzzling,” Herbert Lin, a senior research scholar at Stanford University’s Center for International Security and Cooperation, told CNBC in an interview.

Anthropic is the only American company ever to be publicly named a supply chain risk, as the designation has traditionally been used against foreign adversaries. But the company hasn’t received any official declaration beyond social media posts.

A formal designation would require defense vendors and contractors to certify that they don’t use Anthropic’s models in their work with the Pentagon.

The dispute centered on how Anthropic’s artificial intelligence models could be used by the military. The Department of Defense wanted Anthropic to grant the agency unfettered access to its Claude models across all lawful purposes, while Anthropic wanted assurance that its technology would not be tapped for fully autonomous weapons or domestic mass surveillance.

With no agreement reached by Friday’s deadline, President Donald Trump directed federal agencies to “immediately cease” all use of Anthropic’s technology, and said there would be a six-month phaseout period for agencies like the DOD.

Experts told CNBC the supply chain risk designation is highly unusual, especially since the U.S. and Israel began carrying out strikes in Iran just hours later. A group of retired defense officials, policy leaders and executives wrote to Congress on Thursday, defending Anthropic and calling the Trump administration’s designation a “dangerous precedent.”

Anthropic’s models are still being used to support U.S. military operations in Iran, even after the company was blacklisted, as CNBC previously reported.

Talks between Anthropic and the DOD are now reportedly back on, according to the Financial Times, but there are still big questions hanging over the issue as of Thursday.

Why is the U.S. government still using Claude?

Stanford’s Lin doesn’t understand why the DOD is still using Anthropic’s models in sensitive settings if they pose such a threat. If the Trump administration really sees Anthropic as a risk to national security, he said, it wouldn’t make sense to phase out the models over an extended period of time.

“OK, wait a minute, they’re a really dangerous player for U.S. national security, so you’re going to use them for another six months? Huh?” Lin said. 

Michael Horowitz, a senior fellow for technology and innovation at the Council on Foreign Relations, said it’s “especially notable” that Anthropic’s models were used to support the U.S. military action in Iran. He said “there’s no clearer signal” of how much the Pentagon values the technology.

“Even in a situation where there is this intense feud between the company and the Pentagon, they are using their technology in the most important military operation that the United States is conducting,” he said. 

Transitioning away from Anthropic toward a new vendor takes time and comes at a significant cost in terms of efficiency, said Jacquelyn Schneider, a Hargrove Hoover fellow at Stanford University’s Hoover Institution.

Until recently, Anthropic was the only AI company approved to deploy its models across the agency’s classified networks. OpenAI and Elon Musk’s xAI received clearance, but their systems can’t be deployed or adopted overnight.

What’s the actual threat?

The Anthropic logo appears on a smartphone screen with multiple Claude AI logos in the background. Following the release of Claude Opus 4.6 on February 5, Anthropic continues to challenge its main competitors in the generative AI market in Creteil, France, on February 6, 2026.

Samuel Boivin | Nurphoto | Getty Images

By designating Anthropic a supply chain risk, the DOD is suggesting that the company is “really bad” for U.S. national security, Lin said. But he stressed that the agency hasn’t clearly outlined what kind of threat the company poses.

“They don’t point to any technical failing, they don’t point to any hack,” Lin said. “They say things like ‘They’re arrogant,’ and ‘We don’t want you telling the DoD what to do in some hypothetical situation that hasn’t happened yet.'”

Lin said the other punishment that Hegseth was threatening to impose on Anthropic, invoking the Defense Production Act, also contradicts the idea that the company threatens national security. 

The Defense Production Act allows the president to control domestic industries under emergency authority when it’s in the interest of national security. It could essentially compel Anthropic to let the Pentagon use its technology. 

Horowitz said he thinks the clash between Anthropic and the DOD is “masquerading” as a policy dispute. 

Months earlier, venture capitalist and White House AI and crypto czar David Sacks criticized the company for “running a sophisticated regulatory capture strategy based on fear-mongering,” after an essay published by an executive, and conservatives have repeatedly accused Anthropic of pushing “woke AI.”

Anthropic CEO Dario Amodei took a different approach than other tech executives, avoiding getting cozy with the Trump administration in its early days.

“This feels to me like a dispute that is about politics and personalities,” Horowitz said. 

Is an official designation on the way?

U.S. Defense Secretary Pete Hegseth walks on the day of classified briefings for the U.S. Senate and House of Representatives on the situation in Iran, on Capitol Hill in Washington, D.C., U.S., March 3, 2026.

Kylie Cooper | Reuters

Anthropic hasn’t been designated a supply chain risk by any official measure, and it remains an open question whether or when the company should expect one. Defense contractors have to decide whether to follow Hegseth’s directive on social media or wait for more formal guidance.

Several executives told CNBC that their companies are moving away from Anthropic’s models, and a venture capitalist said a number of portfolio companies are switching “out of an abundance of caution.” But others are holding off; C3 AI Chairman Tom Siebel said he doesn’t see a “need to mitigate” the technology “until it gets litigated.”

Schneider said businesses are rational, and if they think it’s high risk to work with Anthropic, whether it’s formally declared a supply chain risk or not, they’re going to hedge and look for other partners.

“There’s all sorts of decisions that have been made within the Trump administration that, by law, require more codification,” Schneider said. “Even the example of moving from DoD to [Department of War]. That by law needs more codification, but all the contractors are using DoW.”

Even so, Samir Jain, vice president of policy at the Center for Democracy and Technology, said social media posts likely aren’t sufficient to constitute an actual designation.

“There’s a process that the statute requires, including an actual finding that Anthropic presents national security risks if it’s part of the supply chain,” he said in an interview. “I don’t think, factually, that that predicate could possibly be met here.”

Anthropic said in a statement Friday that it will challenge “any supply chain risk designation in court.”

Does this have anything to do with the U.S. strikes on Iran?

Smoke rises from Israeli bombardment on the southern Lebanese village of Khiam on March 4, 2026.

Rabih Daher | Afp | Getty Images

For Schneider, the war in Iran now looms large over the spat between Anthropic and the DOD. She said she’s left wondering whether the two conflicts were happening in parallel, or if they were somehow related. 

“Obviously, you’re not going to walk away from technologies that are deeply embedded in your wartime processes right before you go to war,” Schneider said.

She said planning a military operation of that magnitude would have required “a lot of sleepless nights,” so she was surprised the DOD was willing to spend such a “remarkable amount of energy” on a public clash ahead of the initial attack.

What happens next?

As the war in Iran stretches into its sixth day, Anthropic’s path forward with the DOD remains a big mystery.  

Horowitz said he would bet that the six-month off-boarding period will become “a locus for some re-examination” within the Pentagon, especially since members of Congress and broader public markets have shown so much interest in the dispute.

Lin expressed a similar sentiment, and said he wouldn’t bet on Anthropic’s models being out of the DOD a year from now.

Schneider is less convinced. 

“I wish I had a more definitive thought about where this is all going to go, but everything is so unprecedented,” she said. When it comes to historical examples or analogous cases, Schneider said: “I don’t have those. It’s just super limited.”

The DOD declined to comment. Anthropic didn’t provide a comment.
