
Anthropic PBC filed two federal lawsuits Monday challenging the Pentagon’s decision to label the artificial intelligence startup a “supply chain risk,” an unprecedented move following the company’s refusal to permit unrestricted military use of its Claude chatbot.
The legal filings, submitted in California federal court and the D.C. Circuit Court of Appeals, accuse the Department of Defense of an “unlawful campaign of retaliation.”
The dispute centers on Anthropic’s internal safety policies, which prohibit its technology from being used for fully autonomous weapons and the mass surveillance of Americans.
The Pentagon’s designation—traditionally reserved for blocking technology from foreign adversaries—effectively bars Anthropic from defense-related contracts.
Defense Secretary Pete Hegseth had previously demanded the company authorize “all lawful uses” of its AI, a directive Anthropic claims violates its protected speech and exceeds federal statutory authority.
The designation comes at a critical time for the San Francisco-based firm.
Despite the friction with the Pentagon, Claude remains deeply embedded in classified military systems, including those currently deployed in the U.S.-Israeli conflict with Iran. Even so, President Donald Trump has ordered a six-month phase-out of the software across federal agencies.
Legal experts note that this is the first known instance of the U.S. government applying a “supply chain risk” designation to a domestic entity.
The Defense Department declined to comment on the litigation Monday. Still, the case's outcome could set a far-reaching precedent for how much control the executive branch can exert over the terms of service of private AI developers.