
Anthropic PBC filed two federal lawsuits Monday challenging the Pentagon’s decision to label the artificial intelligence startup a “supply chain risk,” an unprecedented move following the company’s refusal to permit unrestricted military use of its Claude chatbot.
The legal filings, submitted in California federal court and the D.C. Circuit Court of Appeals, accuse the Department of Defense of an “unlawful campaign of retaliation.”
The dispute centers on Anthropic’s internal safety policies, which prohibit its technology from being used for fully autonomous weapons and the mass surveillance of Americans.
The Pentagon’s designation—traditionally reserved for blocking technology from foreign adversaries—effectively bars Anthropic from defense-related contracts.
Defense Secretary Pete Hegseth had previously demanded the company authorize “all lawful uses” of its AI, a directive Anthropic contends violates its protected speech and exceeds the department’s statutory authority.
The designation arrives at a precarious moment for the San Francisco-based firm.
Despite the friction with the Department of Defense, Claude remains deeply embedded in classified military systems, including those currently deployed in the U.S.-Israeli conflict with Iran.