Critical Vulnerability in Anthropic’s Model Context Protocol Enables Remote Code Execution, Impacting AI Systems Globally

Critical Design Flaw in Anthropic’s MCP Exposes AI Systems to Remote Code Execution

Recent cybersecurity research has unveiled a significant vulnerability within Anthropic’s Model Context Protocol (MCP), a foundational component in numerous artificial intelligence (AI) applications. This inherent design flaw permits remote code execution (RCE), potentially compromising sensitive data and threatening the integrity of the AI supply chain.

Understanding the Vulnerability

The MCP serves as a bridge between large language models (LLMs) and external tools or data sources, facilitating seamless integration and interaction. However, researchers from OX Security have identified a critical weakness in the MCP’s architecture, particularly concerning its standard input/output (STDIO) transport interface. This flaw allows unauthorized command execution on systems utilizing vulnerable MCP implementations, granting attackers access to confidential user information, internal databases, API keys, and chat histories.

Scope of the Issue

The vulnerability is embedded within Anthropic’s official MCP software development kit (SDK), affecting all supported programming languages, including Python, TypeScript, Java, and Rust. This widespread issue impacts over 7,000 publicly accessible servers and software packages, collectively accounting for more than 150 million downloads.

Specific Vulnerabilities Identified

The unsafe default configurations in the MCP’s STDIO interface have led to the discovery of multiple vulnerabilities across various projects:

– CVE-2025-65720 (GPT Researcher)

– CVE-2026-30623 (LiteLLM) – Patched

– CVE-2026-30624 (Agent Zero)

– CVE-2026-30618 (Fay Framework)

– CVE-2026-33224 (Bisheng) – Patched

– CVE-2026-30617 (Langchain-Chatchat)

– CVE-2026-33224 (Jaaz)

– CVE-2026-30625 (Upsonic)

– CVE-2026-30615 (Windsurf)

– CVE-2026-26015 (DocsGPT) – Patched

– CVE-2026-40933 (Flowise)

These vulnerabilities fall into four primary categories, each capable of triggering remote command execution on the server:

1. Unauthenticated and authenticated command injection via MCP STDIO

2. Unauthenticated command injection through direct STDIO configuration with hardening bypass

3. Unauthenticated command injection via MCP configuration edits through zero-click prompt injection

4. Unauthenticated command injection through MCP marketplaces via network requests, activating hidden STDIO configurations

Technical Insights

The core issue lies in the MCP’s design, which allows direct configuration-to-command execution via the STDIO interface across all implementations, regardless of the programming language. Originally intended to initiate a local STDIO server and provide a handle back to the LLM, this design inadvertently permits the execution of arbitrary operating system commands. If the command successfully creates an STDIO server, it returns the handle; otherwise, it executes the command and returns an error.

Historical Context

Similar vulnerabilities stemming from this fundamental issue have been reported independently over the past year, including:

– CVE-2025-49596 (MCP Inspector)

– CVE-2026-22252 (LibreChat)

– CVE-2026-22688 (WeKnora)

– CVE-2025-54994 (@akoskm/create-mcp-server-stdio)

– CVE-2025-54136 (Cursor)

Despite these reports, Anthropic has chosen not to alter the protocol’s architecture, citing the behavior as expected. While some vendors have released patches, the fundamental issue remains unaddressed in Anthropic’s MCP reference implementation, leading developers to inherit these code execution risks.

Implications for the AI Supply Chain

This discovery underscores how AI-powered integrations can inadvertently expand the attack surface, posing significant risks to the AI supply chain. The widespread adoption of MCP across various platforms means that a single architectural decision can propagate vulnerabilities throughout the ecosystem, affecting numerous projects and services.

Recommended Mitigation Strategies

To mitigate the risks associated with this vulnerability, the following measures are advised:

1. Restrict Public Access: Block public IP access to sensitive services to prevent unauthorized external connections.

2. Monitor MCP Tool Usage: Implement monitoring to detect and respond to unusual MCP tool invocations.

3. Sandbox MCP Services: Run MCP-enabled services within a sandboxed environment to limit potential damage from exploits.

4. Treat External Inputs as Untrusted: Handle external MCP configuration inputs with caution, treating them as untrusted to prevent injection attacks.

5. Verify MCP Servers: Only install MCP servers from verified and trusted sources to ensure integrity.

Conclusion

The revelation of this critical design flaw in Anthropic’s MCP highlights the importance of rigorous security assessments in AI development. As AI systems become increasingly integrated into various applications, ensuring the security of underlying protocols like MCP is paramount to safeguarding sensitive data and maintaining trust in AI technologies.