The Pentagon in Arlington, Va., Aug. 27, 2025. The U.S. government said on Tuesday, March 17, 2026, that it had deemed the artificial intelligence company Anthropic an “unacceptable risk” to national security because the start-up could disable or alter its technology to suit its own interests, rather than the country’s priorities, in a time of war. (Tierney L. Cross/The New York Times)
SAN FRANCISCO — The U.S. government said Tuesday that it had deemed the artificial intelligence company Anthropic an “unacceptable risk” to national security because the startup could disable or alter its technology to suit its own interests, rather than the country’s priorities, in a time of war.
In a 40-page filing in U.S. District Court for the Northern District of California, lawyers for the government said they questioned whether Anthropic was a “trusted partner,” especially given that AI systems “are acutely vulnerable to manipulation.”
Giving Anthropic access to the Defense Department’s warfighting infrastructure would therefore “introduce unacceptable risk into DoW supply chains,” the government said, referring to the Department of War, which is the Trump administration’s favored term.
Anthropic pointed to comments that Dario Amodei, its CEO, had made about how the military — not his company — decides how its technologies are used.
“We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” Amodei said in a Feb. 26 statement.
The filing was the government’s first response to lawsuits from Anthropic, a leading AI company based in San Francisco that makes the Claude chatbot. On March 9, Anthropic filed two lawsuits — one in the same court and the other in the U.S. Court of Appeals for the District of Columbia Circuit — to challenge Defense Secretary Pete Hegseth’s decision last month to label it a “supply chain risk.”
Hegseth acted after the Pentagon battled with the company over a $200 million contract for the use of AI in classified systems. During negotiations for the contract, Anthropic had said it did not want its AI used for mass surveillance of Americans or with autonomous lethal weapons. The Pentagon countered that it was not up to a private company to tell it how to use the technology.
When the two sides could not agree, Hegseth said Anthropic posed a supply chain risk, a move that effectively cut the company off from working with the U.S. government. The label had previously been used only to bar foreign companies that posed a national security risk.
In its lawsuits, Anthropic accused the Pentagon of using the label to punish it on ideological grounds and said its First Amendment rights were being violated. The company has asked the judge in the California court to block the government’s designation. More than 100 business customers might stop working with Anthropic because of the risk designation, the company has said, potentially leading it to lose billions of dollars in revenue.
A hearing on Anthropic’s request for a preliminary injunction is set for next Tuesday.
In the filing Tuesday, the government said its dispute with Anthropic stemmed from the company’s behavior during the contract negotiations, not from the limits the startup had sought on the use of its technology for mass surveillance and autonomous weapons. It added that the Pentagon was simply exercising its authority to choose vendors.
Government lawyers also addressed Anthropic’s First Amendment argument, saying it was “not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion.”
Anthropic began providing its AI technology last year in a pilot program established by the Pentagon. Two defense officials said the military had continued using Anthropic to help analyze intelligence as the war in Iran entered its third week.
Other tech companies and legal rights groups have filed legal briefs to support Anthropic in its lawsuits. On Monday, the American Civil Liberties Union and the Center for Democracy and Technology filed a brief arguing that Anthropic was protected by the First Amendment in speaking up against the Pentagon about its AI technology.
Microsoft also filed a friend-of-the-court brief, urging a federal court to temporarily block the Pentagon’s designation of Anthropic as a supply chain risk. Thirty-seven engineers and researchers from OpenAI and Google, including Jeff Dean, Google’s chief scientist, also filed a brief supporting Anthropic.
—
This article originally appeared in The New York Times.
By Sheera Frenkel/Tierney L. Cross
c. 2026 The New York Times Company