Anthropic Challenges Pentagon’s Supply Chain Risk Designation in Court
Anthropic, a leading artificial intelligence (AI) company, has announced that it will contest in court the U.S. Department of Defense's (DOD) recent designation of the company as a supply chain risk. The designation, which CEO Dario Amodei calls legally unsound, could bar Anthropic from contracting with the Pentagon and its affiliates.
The dispute centers on military oversight and use of AI technologies. Anthropic has firmly opposed the use of its AI systems for mass surveillance of American citizens and for fully autonomous weapons systems, while the Pentagon insists on unrestricted access to AI technologies for all lawful purposes, leaving the two sides fundamentally at odds.
Amodei emphasizes that most of Anthropic's customers will be unaffected by the designation. The restriction applies specifically to the use of Anthropic's AI model, Claude, in direct connection with DOD contracts; other uses of Claude by clients with DOD affiliations are not affected.
Ahead of the legal proceedings, Amodei notes that the DOD's supply chain risk designation is intended to safeguard government interests, not to penalize suppliers, and that the law requires the Secretary of Defense to use the least restrictive means necessary to protect the supply chain. Even for DOD contractors, therefore, the designation does not limit the use of Claude or business relationships with Anthropic unrelated to specific DOD contracts.
The relationship between Anthropic and the DOD has been strained in recent weeks. Productive discussions were reportedly derailed by the leak of an internal memo in which Amodei criticized competitor OpenAI's collaboration with the DOD as "safety theater." The leak has further complicated the ongoing negotiations.
In the wake of the DOD's designation of Anthropic, OpenAI has agreed to collaborate with the Pentagon, a decision that has drawn criticism from within OpenAI's own workforce. Amodei has apologized for the leaked memo, saying the company did not intentionally release the document or direct others to do so, and acknowledging that it was written during a particularly difficult period for the company and does not reflect his considered views.
Amodei concludes by reaffirming Anthropic's commitment to supporting American service members and national security professionals. The company is currently assisting U.S. operations in Iran and pledges to continue providing its AI models to the DOD at minimal cost to ease the transition.
Challenging the DOD's designation in federal court will be an uphill battle. The relevant statute grants the Pentagon broad discretion in matters of national security, making such decisions difficult for companies to contest. Experts suggest, however, that while the bar is high, it is not insurmountable.