U.S. Government Halts Use of Anthropic’s AI Amid National Security Concerns
In a decisive move, the United States government has ordered all federal agencies to stop using Anthropic's AI model, Claude, citing significant national security risks. This directive marks the first time a domestic technology firm has been labeled a supply chain threat, a designation previously reserved for foreign entities such as Huawei.
On February 28, 2026, President Donald Trump announced via Truth Social that federal agencies must "IMMEDIATELY CEASE" all use of Anthropic's technology. Recognizing how deeply Claude is integrated within departments such as the Department of War (DoW), the administration granted agencies a six-month period to transition away from Anthropic's products.
Shortly after the President's announcement, Defense Secretary Pete Hegseth declared on X (formerly Twitter) that Anthropic is now considered a "Supply-Chain Risk to National Security." He stated, "No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
The core of the dispute lies in Anthropic's refusal to grant the Pentagon unrestricted access to Claude for certain applications, notably mass domestic surveillance and fully autonomous weapons systems. The Pentagon insisted on full access "for all lawful purposes," but Anthropic's CEO, Dario Amodei, declined, asserting that the company "cannot in good conscience accede" to such demands.
Anthropic had been a pioneer in deploying AI models on the U.S. government’s classified networks, operating under a $200 million DoW contract since June 2024. Despite prolonged negotiations, the two parties failed to reach an agreement. The Pentagon issued an ultimatum: comply by 5:01 PM ET on Friday or face blacklisting.
Anthropic contends that the Pentagon’s proposed contract, presented as a compromise, included language that could potentially nullify the company’s safeguards. Amodei emphasized that current AI models are not sufficiently reliable for deployment in fully autonomous weapons systems, a stance aimed at protecting both American military personnel and civilians. He also argued that mass surveillance would constitute a fundamental violation of Americans’ civil rights.
The company plans to challenge the supply chain risk designation in court, arguing that the action is legally unsound under 10 U.S.C. § 3252, which limits the designation's scope strictly to DoW contract use rather than broader commercial relationships. Under that reading, individual customers, API users, and non-DoW contractors would remain unaffected by the designation.
However, the broader industry impact could be severe. Anthropic depends on cloud computing infrastructure from Amazon, Microsoft, and Google, all of which hold defense contracts. A strict reading of Hegseth's language, barring any entity that does business with the military from working with Anthropic, could theoretically threaten those cloud relationships. Legal experts have warned that the designation sets a dangerous precedent, noting it waters down a tool historically reserved for entities tied to foreign governments.
President Trump has warned Anthropic of "major civil and criminal consequences" if it fails to cooperate during the phase-out period. Anthropic has stated that it remains committed to supporting lawful national security use cases and will work to ensure a smooth transition for U.S. troops and ongoing military operations. The company maintains that no amount of government pressure will shift its position on autonomous weapons or domestic surveillance.