Pentagon Labels Anthropic a Supply Chain Risk Amid AI Military Use Dispute
In a significant escalation of tensions between the U.S. Department of Defense (DoD) and artificial intelligence (AI) firm Anthropic, Secretary of Defense Pete Hegseth has officially designated the company as a supply chain risk. This move follows a protracted negotiation impasse concerning the permissible applications of Anthropic’s AI model, Claude, particularly regarding mass domestic surveillance and the deployment of fully autonomous weapons.
Anthropic has firmly opposed the use of its AI technologies for mass surveillance of American citizens and for the development of autonomous weaponry. The company stated, "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."
The dispute reached a new level when President Donald Trump announced via Truth Social that all federal agencies are to phase out the use of Anthropic's technology within six months. Following this directive, Secretary Hegseth mandated that all military contractors, suppliers, and partners immediately cease any commercial activities with Anthropic. He emphasized, "In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply Chain Risk to National Security."
The core of the disagreement lies in Anthropic’s insistence on excluding its AI models from applications involving mass domestic surveillance and autonomous weapons. The company supports the use of AI for lawful foreign intelligence and counterintelligence missions but argues that domestic mass surveillance contradicts democratic values and poses significant risks to civil liberties.
Anthropic has also criticized the DoD's stance that it will collaborate only with AI companies that permit any lawful use of their technology, including the removal of existing safeguards. A Pentagon memorandum stated, "The Department must also utilize models free from usage policy constraints that may limit lawful military applications."
In response to the supply chain risk designation, Anthropic described the action as legally unsound and warned that it sets a dangerous precedent for American companies negotiating with the government. The company noted that, under 10 U.S.C. § 3252, such a designation applies only to the use of Claude in DoD contracts and does not affect its service to other customers.
The situation has garnered support for Anthropic from within the tech industry. Hundreds of employees at Google and OpenAI have signed an open letter urging their companies to stand with Anthropic in its dispute with the Pentagon over military applications for AI tools like Claude.
The standoff comes as OpenAI CEO Sam Altman announced an agreement with the DoD to deploy OpenAI's models within the department's classified network, and requested that the agreement's terms be extended to all AI companies. Altman emphasized that AI safety and the broad distribution of benefits are central to OpenAI's mission, highlighting prohibitions on domestic mass surveillance and the requirement of human responsibility in the use of force, including autonomous weapon systems.