Critical Vulnerability in Cursor AI Code Editor Enables Remote Code Execution via Malicious MCP File Modifications

Cybersecurity researchers have identified a significant security flaw in the Cursor AI code editor, designated as CVE-2025-54136 with a CVSS score of 7.2. This vulnerability, termed MCPoison, allows attackers to achieve remote code execution by exploiting the Model Context Protocol (MCP) server configurations within Cursor.

Understanding the Vulnerability

The MCP is an open standard developed by Anthropic, introduced in November 2024, designed to enable large language models (LLMs) to interact seamlessly with external tools, data, and services. Cursor integrates this protocol to enhance its AI-driven code editing capabilities.

The vulnerability arises from the manner in which Cursor handles modifications to MCP server configurations. An attacker can exploit this by:

1. Introducing a Benign MCP Configuration: The attacker adds an innocuous-looking MCP configuration file (.cursor/rules/mcp.json) to a shared repository.

2. Gaining User Approval: A collaborator or victim pulls the code and approves the MCP configuration within Cursor, trusting its legitimacy.

3. Swapping with Malicious Payload: Post-approval, the attacker replaces the previously approved MCP configuration with a malicious payload, such as a script designed to execute unauthorized commands.

4. Achieving Persistent Code Execution: Each time the victim opens Cursor, the malicious MCP configuration executes, granting the attacker persistent remote code execution capabilities.

The core issue lies in Cursor’s trust model, where once an MCP configuration is approved, it remains trusted indefinitely, even if altered subsequently. This oversight exposes organizations to significant supply chain risks, including potential data breaches and intellectual property theft.

Discovery and Disclosure

The vulnerability was discovered by Check Point Research, who reported it to Cursor on July 16, 2025. Following responsible disclosure protocols, Cursor addressed the issue in version 1.3, released in late July 2025. The update now requires user approval for any modifications to the MCP configuration file, mitigating the risk of unauthorized changes leading to remote code execution.

Broader Implications

This discovery underscores the critical importance of robust security measures in AI-assisted development environments. As AI tools become increasingly integrated into software development workflows, they present new attack vectors that can be exploited if not properly secured.

The Cursor vulnerability is not an isolated incident. Similar issues have been identified in other AI-powered code editors:

– Rules File Backdoor Attack: In March 2025, researchers disclosed a supply chain attack vector affecting AI code editors like GitHub Copilot and Cursor. This technique allowed attackers to inject hidden malicious instructions into configuration files, causing the AI to generate compromised code. By exploiting hidden Unicode characters and sophisticated evasion techniques, threat actors could manipulate the AI to insert malicious code that bypasses typical code reviews. ([thehackernews.com](https://thehackernews.com/2025/03/new-rules-file-backdoor-attack-lets.html?utm_source=openai))

– VSXPloit Vulnerability: In July 2025, a critical zero-day vulnerability was discovered in OpenVSX, the open-source marketplace powering extensions for tools like Cursor. This flaw allowed attackers to gain full control over the entire marketplace, enabling them to push malicious updates under trusted accounts. This could have led to a supply chain disaster, compromising millions of developers’ machines. ([bleepingcomputer.com](https://www.bleepingcomputer.com/news/security/the-zero-day-that-couldve-compromised-every-cursor-and-windsurf-user/?utm_source=openai))

– Malicious VSCode Extension: Also in July 2025, a fake extension for the Cursor AI IDE was found to infect devices with remote access tools and infostealers. In one case, this led to the theft of $500,000 in cryptocurrency from a Russian developer. The malicious extension executed a PowerShell script from a remote host to download additional payloads, granting attackers full remote access to the victim’s computer. ([bleepingcomputer.com](https://www.bleepingcomputer.com/news/security/malicious-vscode-extension-in-cursor-ide-led-to-500k-crypto-theft/?utm_source=openai))

Recommendations for Developers

To safeguard against such vulnerabilities, developers and organizations should adopt the following practices:

1. Regularly Update Software: Ensure that all development tools, especially those integrating AI capabilities, are updated to their latest versions to benefit from security patches.

2. Implement Strict Approval Processes: Establish protocols that require re-approval for any modifications to configuration files, even after initial approval.

3. Monitor for Unauthorized Changes: Utilize monitoring tools to detect and alert on unauthorized changes to critical configuration files.

4. Educate Development Teams: Provide training on the potential risks associated with AI-assisted development tools and the importance of vigilance in approving and monitoring configurations.

5. Limit External Integrations: Be cautious when integrating external tools and extensions, ensuring they come from trusted sources and are regularly reviewed for security.

By implementing these measures, organizations can mitigate the risks associated with AI-powered development tools and protect their software supply chains from potential attacks.