Hacking Groups Exploit OpenClaw AI Framework Vulnerabilities to Steal API Keys and Spread Malware

In a significant cybersecurity development, multiple hacking groups have targeted OpenClaw, an open-source autonomous AI framework, to deploy malicious payloads and steal sensitive information. OpenClaw, known earlier in its life as ClawdBot and then MoltBot, was developed by Peter Steinberger, who is now affiliated with OpenAI. Its rapid adoption in late January 2026 has made it a prime target for cybercriminals.

The Appeal of OpenClaw to Cybercriminals

OpenClaw’s architecture provides extensive system privileges, persistent memory access, and integration with critical services. These features, while beneficial for legitimate users, also present opportunities for malicious actors to exploit the system for credential theft and data exfiltration.

Rapid Exploitation Post-Deployment

Within 72 hours of OpenClaw’s widespread deployment, threat actors began exploiting several critical vulnerabilities:

– Remote Code Execution (RCE) Vulnerability (CVE-2026-25253): This high-risk flaw allows attackers to execute arbitrary code on vulnerable systems, leading to unauthorized access and control.

– Supply Chain Poisoning: Attackers infiltrated the software supply chain, introducing malicious code into legitimate software components, thereby compromising the integrity of the system.

– Credential Harvesting via Exposed Administrative Interfaces: Unsecured administrative interfaces were exploited to harvest sensitive credentials, granting attackers further access to systems.

Analysts from Flare have identified over 30,000 compromised OpenClaw instances. Attackers have used these instances to steal API keys, intercept messages, and distribute information-stealing malware through Telegram and other communication channels.

The ClawHavoc Campaign: A Notable Attack

One of the most damaging campaigns, termed ClawHavoc, was detected on January 29, 2026. This supply chain attack involved the mass deployment of malicious payloads disguised as legitimate tools.

The attackers, operating under the alias Hightower6eu, automated the deployment of these malicious payloads. The payloads masqueraded as legitimate cryptocurrency tools, leading unsuspecting users to install malware capable of full system compromise. This allowed attackers to extract persistent memory data and move laterally across enterprise systems.

Automated Skill Poisoning Through ClawHub

By early February, a second campaign emerged, focusing on the OpenClaw community marketplace, ClawHub. Due to the platform’s open publishing model and lack of code review, attackers uploaded backdoored skills from seemingly trustworthy GitHub accounts, including Hightower6eu.

These malicious updates executed remote shell commands, enabling attackers to exfiltrate OAuth tokens, passwords, and API keys in real time.
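Teams vetting third-party skills before installation can catch the crudest of these backdoors with a static scan. The Python sketch below flags source code that invokes a shell, evaluates code dynamically, or opens raw sockets. The pattern list is illustrative only, reflects the exfiltration techniques described above rather than any documented ClawHub tooling, and will not catch obfuscated payloads.

```python
import re

# Heuristic indicators of a backdoored "skill": shell execution,
# dynamic code evaluation, and raw outbound socket use.
SUSPICIOUS_PATTERNS = {
    "shell-exec": re.compile(r"\b(os\.system|subprocess\.(run|Popen|call))\s*\("),
    "dynamic-eval": re.compile(r"\b(eval|exec)\s*\("),
    "raw-socket": re.compile(r"\bsocket\.socket\s*\("),
}

def audit_source(source: str) -> list[str]:
    """Return the names of suspicious patterns found in a skill's source code."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(source)]
```

A hit is not proof of malice (legitimate skills may shell out), but any flagged skill warrants manual review before it is granted agent privileges.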

The Extent of the Compromise

A Shodan scan conducted on February 18, 2026, revealed over 312,000 OpenClaw instances running on the default port 18789. Alarmingly, many of these instances lacked authentication and were accessible over the internet, highlighting a significant attack surface for opportunistic probing.
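Administrators can run a self-check of this kind against hosts they own. The Python sketch below sends a single unauthenticated request to the default port 18789 cited above and classifies the response; the probe path and the status-code interpretation are assumptions, not documented OpenClaw behavior, and probing should only ever target systems you are authorized to test.

```python
import http.client

DEFAULT_PORT = 18789  # default OpenClaw port reported in the Shodan scan

def classify_exposure(status_code: int) -> str:
    """Interpret the HTTP status of an unauthenticated request to the
    admin interface: 401/403 suggest auth is enforced, 200 suggests the
    interface is open, anything else is inconclusive."""
    if status_code in (401, 403):
        return "auth-required"
    if status_code == 200:
        return "exposed"
    return "inconclusive"

def probe(host: str, port: int = DEFAULT_PORT, path: str = "/",
          timeout: float = 3.0) -> str:
    """Send one unauthenticated GET to an instance you own and classify it."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        status = conn.getresponse().status
        conn.close()
    except OSError:
        return "unreachable"
    return classify_exposure(status)
```

A result of "exposed" on an internet-facing host means the instance should be placed behind authentication or a firewall immediately.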

Exposed administrative interfaces have exacerbated the crisis. Honeypot deployments recorded exploitation attempts within minutes of exposure, indicating the high level of interest and activity from threat actors.

Implications for Autonomous AI Security

The incidents involving OpenClaw underscore a critical turning point in the security of autonomous AI agents. Organized threat groups have rapidly adapted, weaponizing an ecosystem that prioritized capability over cybersecurity.

With OpenClaw's developer now at OpenAI, experts emphasize the urgent need for security-by-design approaches in future AI frameworks. This means building robust security controls in from the outset to prevent similar vulnerabilities and exploits.

Recommendations for Organizations

In light of these developments, organizations deploying or evaluating autonomous AI assistants are advised to:

– Secure API Credentials: Implement stringent measures to protect API keys and other sensitive credentials from unauthorized access.

– Isolate AI Workloads: Ensure that AI workloads are isolated from critical systems to minimize potential damage in the event of a compromise.

– Regularly Update and Patch Systems: Stay vigilant by applying the latest security patches and updates to mitigate known vulnerabilities.

– Conduct Thorough Code Reviews: Implement rigorous code review processes, especially for third-party integrations, to detect and prevent the introduction of malicious code.
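For the first recommendation, one low-effort baseline is to keep API keys out of source and configuration files entirely and to fail fast when they are absent. A minimal Python sketch follows; the variable name OPENCLAW_API_KEY is hypothetical, not an actual OpenClaw setting.

```python
import os

def require_secret(name: str) -> str:
    """Read a credential from the environment, failing loudly if it is
    missing or empty, so a misconfigured agent never starts at all."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value

# Example: the agent refuses to boot without its key in the environment.
# api_key = require_secret("OPENCLAW_API_KEY")
```

Pairing this with a secrets manager and short-lived, scoped keys limits the blast radius if a single instance is compromised.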

By adopting these measures, organizations can enhance their security posture and reduce the risk of falling victim to similar exploitation campaigns.