
Anthropic CEO Dario Amodei publicly rejected a Pentagon ultimatum demanding unrestricted military use of the company’s Claude AI system ahead of a Friday deadline tied to a $200 million contract.
The United States Department of Defense threatened to remove Anthropic from military systems, designate it a supply chain risk and potentially invoke the Defense Production Act of 1950 to compel access if the company refused.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” Amodei wrote, arguing that fully autonomous weapons systems remain technically unreliable and ethically problematic.
The dispute centres on two safeguards Anthropic imposed: a ban on autonomous targeting of enemy combatants and a prohibition on mass surveillance of US citizens, conditions the Pentagon considers unacceptable for lawful military operations.
Defense spokesman Sean Parnell warned that the department “will not let ANY company dictate the terms regarding how we make operational decisions,” escalating the standoff into a public confrontation over government authority and private tech control.
The competitive pressure is intensifying, with Elon Musk’s xAI reportedly accepting broader classified use terms while OpenAI and Google accelerate their own defence negotiations, threatening Anthropic’s early advantage in classified AI access.
The episode carries implications beyond artificial intelligence, as invoking the Defense Production Act against a private tech firm could set a precedent for compelling modifications to other technologies, including privacy-focused crypto infrastructure.
For crypto markets, the case underscores the vulnerability of centralised providers to state pressure and may strengthen arguments for decentralised AI and blockchain systems designed to resist unilateral government intervention.