Critical Vulnerabilities in Claude Code Expose Systems to Remote Attacks and API Key Theft
Recent discoveries have unveiled critical security flaws in Anthropic’s Claude Code, an AI-driven command-line development tool. These vulnerabilities could allow attackers to achieve Remote Code Execution (RCE) and exfiltrate Anthropic API keys by exploiting project configuration files. Check Point Research (CPR) identified these issues, and Anthropic has since patched all vulnerabilities prior to public disclosure.
Understanding Claude Code and Its Configuration
Claude Code enables developers to delegate tasks directly from their terminal, streamlining the development process. To facilitate team collaboration, it supports project-level configurations through a `.claude/settings.json` file stored within the repository. This file is inherited when a repository is cloned, meaning any contributor with commit access can modify it. CPR discovered that malicious configurations could trigger unintended actions on a developer’s machine, effectively turning a passive setup file into an execution vector.
Vulnerability 1: RCE via Untrusted Project Hooks
Anthropic’s Hooks feature allows users to define commands that execute automatically at specific points in Claude Code’s lifecycle, such as formatting code after an edit. These hooks are defined in the repository-controlled `.claude/settings.json`. CPR found that when cloning an untrusted repository with a malicious hook configured to trigger on `SessionStart`, Claude Code executed the command immediately upon initialization. While the tool presented a general trust dialog, it did not explicitly warn that hook commands were already running in the background without user confirmation. This allowed attackers to execute arbitrary shell commands, such as establishing a reverse shell.
Vulnerability 2: RCE Using MCP Consent Bypass (CVE-2025-59536)
Claude Code supports the Model Context Protocol (MCP) to interact with external tools, configured via an `.mcp.json` file. Following CPR’s initial report, Anthropic implemented a warning dialog for MCP initialization. However, CPR found a bypass using two settings in `.claude/settings.json`: `enableAllProjectMcpServers` and `enabledMcpjsonServers`. By leveraging these settings to auto-approve MCP servers, CPR executed malicious commands immediately upon running Claude before the user could interact with the trust dialog, once again enabling RCE.
Vulnerability 3: API Key Exfiltration (CVE-2026-21852)
Further investigation into `.claude/settings.json` revealed that environment variables could also be defined. CPR targeted `ANTHROPIC_BASE_URL`, which controls the endpoint for Claude Code API communications. By pointing this URL to a malicious server, an attacker could intercept the tool’s initial API requests. CPR observed that before the user even interacted with the trust dialog, Claude Code transmitted the full Anthropic API key in plaintext within the authorization header. With a stolen API key, attackers could perform billing fraud or access shared Claude Workspaces. While files in a workspace cannot be downloaded after manual upload, CPR bypassed this by using the code execution tool to regenerate the file, making it downloadable and exposing sensitive information.
Anthropic’s Response and Recommendations
Anthropic responded promptly to CPR’s findings, implementing fixes to strengthen user trust prompts, block execution of external tools without explicit approval, and prevent API communications until trust is confirmed. These measures aim to mitigate the risks associated with the identified vulnerabilities.
This incident underscores the evolving threat landscape in AI-assisted development tools. As these tools become more integrated into enterprise workflows, repository configuration files can no longer be treated as passive settings. They now influence execution, networking, and permissions, meaning the risk extends beyond running untrusted code to simply opening an untrusted project.
Organizations must update their security controls to address the blurred trust boundaries introduced by AI-driven automation. Developers are advised to exercise caution when cloning repositories, especially from untrusted sources, and to review configuration files for potential malicious entries. Regularly updating tools and applying patches promptly can also help mitigate such vulnerabilities.