Anthropic’s Claude AI Faces Controversy Over Military Use Amid Defense Industry Shift

Anthropic’s advanced AI model, Claude, has become a focal point of controversy amid the ongoing conflict between the United States and Iran. Despite directives from President Trump for civilian agencies to cease using Anthropic products, the Department of Defense (DoD) continues to employ Claude in critical military operations. The resulting tangle of governmental directives and industry responses has left Anthropic in a precarious position.

Governmental Directives and Military Operations

In February 2026, President Trump ordered all civilian agencies to discontinue the use of Anthropic’s products. However, the DoD was granted a six-month period to phase out its reliance on Claude. This grace period coincided with escalating tensions in the Middle East, culminating in a joint U.S.-Israel military operation targeting Tehran. During these operations, Claude played a pivotal role in real-time targeting and prioritization of military objectives. According to reports, the AI system, integrated with Palantir’s Maven platform, suggested hundreds of targets, provided precise location coordinates, and prioritized these targets based on strategic importance.

Although Secretary of Defense Pete Hegseth has signaled his intention to designate Anthropic a supply-chain risk, a move that would normally sever the Pentagon’s ties with the company, no formal action has been taken. As a result, the military continues to use Claude without legal impediment.

Defense Industry’s Response

While the military maintains its use of Claude, the broader defense industry is rapidly distancing itself from Anthropic’s AI solutions. Major defense contractors, including Lockheed Martin, have begun replacing Claude with alternative AI models, and numerous subcontractors are making the same transition. A managing partner at J2 Ventures revealed that ten of his portfolio companies have stopped using Claude for defense applications and are actively seeking replacements.

Underlying Tensions and Ethical Considerations

The discord between Anthropic and the Pentagon stems from fundamental disagreements over the permissible uses of AI technology. Anthropic has consistently opposed the deployment of Claude for mass domestic surveillance and fully autonomous weaponry. This stance has led to strained relations with the DoD, which seeks broader applications for AI in military contexts.

In February 2026, Defense Secretary Hegseth summoned Anthropic CEO Dario Amodei to discuss these issues. The Pentagon has threatened to label Anthropic a supply-chain risk or to invoke the Defense Production Act to compel the company to tailor Claude to military specifications. Despite these pressures, Anthropic has held to its ethical guidelines, refusing to loosen its restrictions on how its AI technology may be used.

Public Perception and Market Impact

The high-profile dispute between Anthropic and the Pentagon has inadvertently boosted Claude’s visibility among the general public. Following media coverage of the conflict, Claude’s chatbot application surged to the top of the App Store charts, surpassing competitors like OpenAI’s ChatGPT. This surge indicates a growing public interest in AI technologies and the ethical considerations surrounding their use.

Future Implications

The ongoing situation presents a complex challenge for Anthropic. While the company’s commitment to ethical AI use has garnered public support, its stance has led to significant losses in defense contracts and strained relationships with government agencies. The potential designation of Anthropic as a supply-chain risk could have far-reaching legal and financial consequences.

As the U.S. military continues its operations in Iran with Claude’s assistance, the broader defense industry is rapidly adapting, seeking AI solutions that align with both operational requirements and ethical standards. Anthropic’s experience underscores the delicate balance between technological innovation, ethical responsibility, and national security interests.