
The Trump administration formally defended the Pentagon’s blacklisting of Anthropic in a Tuesday court filing, asserting that the designation of the artificial intelligence firm as a national security threat was both justified and lawful.
The filing serves as the government’s first major response to a high-stakes lawsuit brought by the maker of the Claude AI assistant, which seeks to overturn the restrictive designation.
Defense Secretary Pete Hegseth designated Anthropic a "national security supply chain risk" on March 3, 2026, after months of negotiations reached an impasse.
The dispute centers on Anthropic’s refusal to remove safety "guardrails" that prevent its technology from being integrated into fully autonomous weapons systems or used for domestic mass surveillance.
While Anthropic contends the blacklist is retaliation for its protected corporate speech and safety principles, the Justice Department argued that the dispute is a matter of procurement and conduct.
"It was only when Anthropic refused to release the restrictions on the use of its products — which refusal is conduct, not protected speech — that the President directed all federal agencies to terminate their business relationships with Anthropic," the administration stated in the filing.
The government further maintained that no one has restricted the company's "expressive activity," but rather that the state has the authority to exclude vendors who do not meet military operational requirements.
The fallout from the designation is potentially catastrophic for the San Francisco-based startup.
President Trump backed the move to exclude Anthropic from military contracts, but the broader fallout could cost the company billions of dollars in revenue this year as enterprise partners and other federal agencies distance themselves from the "supply chain risk" label.
Anthropic executives have warned that the move threatens the company’s reputation and its standing as a trusted partner in the global tech sector.
Anthropic’s legal team, currently arguing the case in a California federal court, has requested an emergency stay to block the Pentagon's decision while the litigation proceeds.
The company maintains that today’s AI models are not yet reliable enough for lethal autonomy and that opposing domestic surveillance is a fundamental principle.
Some legal experts have noted that the company may have a strong case regarding government overreach, as the "supply chain risk" designation has historically been reserved for foreign adversaries rather than domestic firms.