Anthropic Faces Security Lapses, Legal Battles, Yet Sees User Growth in March 2026

Anthropic’s Turbulent March: Security Lapses, Legal Battles, and Market Resilience

In March 2026, Anthropic, a leading artificial intelligence (AI) company, faced a series of significant challenges that tested its operational integrity and market position. Known for its commitment to AI safety and ethical considerations, the company encountered both internal mishaps and external pressures that drew widespread attention.

Security Lapses and Internal Errors

On March 31, 2026, Anthropic inadvertently included a sensitive file in the release of version 2.1.88 of its Claude Code software package. This oversight exposed nearly 2,000 source code files and over 512,000 lines of code, effectively revealing the architectural blueprint of one of its flagship products. Security researcher Chaofan Shou promptly identified the issue and highlighted it on social media. Anthropic responded by attributing the incident to human error during the release process, emphasizing that it was not a result of a security breach.
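The report does not say how the stray files were discovered, but exposures of this kind are often caught simply by listing what a published package actually contains. The Python sketch below is purely illustrative and assumes nothing about Anthropic’s release pipeline: it takes a locally downloaded package tarball (most registries distribute packages as gzipped tarballs) and flags entries whose extensions suggest raw source files or secrets rather than intended build output. The suffix list and the `audit_package` helper are hypothetical, not details from the incident.

```python
import sys
import tarfile

# Hypothetical example: extensions that usually indicate raw source files or
# secrets rather than the bundled output a published package is expected to ship.
SUSPECT_SUFFIXES = (".ts", ".tsx", ".map", ".env", ".pem")


def audit_package(tarball_path: str) -> list[str]:
    """Return archive members whose names suggest unintended files."""
    with tarfile.open(tarball_path, "r:gz") as archive:
        return [
            member.name
            for member in archive.getmembers()
            if member.isfile() and member.name.endswith(SUSPECT_SUFFIXES)
        ]


if __name__ == "__main__":
    # Usage: python audit_package.py path/to/package.tgz
    for name in audit_package(sys.argv[1]):
        print("unexpected file:", name)
```

A check along these lines, run against the packed artifact before publishing, is one common way release pipelines guard against shipping files that were never meant to leave the repository.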

This incident followed a similar lapse the previous week, in which approximately 3,000 internal files were accidentally made publicly accessible. Among these was a draft blog post detailing a powerful new AI model that had not yet been announced. These consecutive errors have raised concerns about the company’s internal security protocols and quality-control measures.

Legal Disputes with the Department of Defense

Simultaneously, Anthropic has been embroiled in a legal confrontation with the U.S. Department of Defense (DoD). The conflict centers on Anthropic’s refusal to permit the military unrestricted use of its AI systems, particularly for mass surveillance and fully autonomous weapons. In response, the DoD labeled Anthropic a supply-chain risk, a designation typically reserved for foreign adversaries. The label requires any company or agency working with the Pentagon to certify that it does not use Anthropic’s models.

Anthropic challenged this designation in court, arguing that it was an unlawful and retaliatory action infringing upon the company’s free speech rights. On March 26, 2026, a federal judge granted Anthropic an injunction against the government’s order, instructing the administration to rescind the supply-chain risk designation and halt directives for federal agencies to sever ties with the company. The court found that the government’s actions likely violated constitutional protections.

Market Resilience Amidst Controversy

Despite these challenges, Anthropic’s consumer-facing product, Claude, has experienced a surge in popularity. Data analysis indicates a significant increase in paid subscribers, with daily active users on mobile devices rising sharply. On March 2, 2026, Claude’s mobile app recorded 149,000 downloads in the U.S., surpassing its competitor ChatGPT, which recorded 124,000 downloads the same day. The growth suggests that the controversies may have heightened public interest in Anthropic and trust in its commitment to ethical AI practices.

Conclusion

March 2026 has been a pivotal month for Anthropic, marked by internal security lapses and high-profile legal disputes. While these events have tested the company’s resilience, the continued growth of its consumer base points to a robust market presence. Moving forward, Anthropic will need to strengthen its release and security protocols and navigate its legal challenges to maintain its position as a leader in ethical AI development.