Infostealer Malware Targets OpenClaw AI, Exposes Sensitive Config Data and Raises Security Concerns

In a development that highlights the evolving cyber threat landscape, cybersecurity researchers have identified information-stealing malware that exfiltrated configuration files from a user’s OpenClaw AI agent. OpenClaw, previously known as Clawdbot and Moltbot, is an open-source autonomous AI assistant designed to run locally on user devices and integrate with a range of messaging platforms.

This incident marks a pivotal shift in infostealer behavior: from merely harvesting browser credentials to targeting the core operational data of personal AI agents. Hudson Rock, the cybersecurity firm that uncovered the breach, emphasized the gravity of this evolution, noting that attackers are now aiming to capture the “souls” and identities of AI assistants.

Alon Gal, Chief Technology Officer at Hudson Rock, indicated that the malware involved is likely a variant of Vidar, an off-the-shelf information stealer active since late 2018. Rather than using a module built to target OpenClaw specifically, the malware relied on a broad file-grabbing routine, systematically searching directories known to hold sensitive information for files with particular extensions.
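
To make that mechanism concrete, the minimal sketch below shows how a generic, extension-based file sweep works. The extension list and directories are illustrative assumptions, not Vidar’s actual configuration, and the snippet only prints matches rather than staging or exfiltrating anything:

```python
from pathlib import Path

# Illustrative only: these extensions and directories are assumptions, not
# Vidar's actual configuration. A generic grabber sweeps likely locations
# for "interesting" extensions, which is how AI-agent configs get swept up
# incidentally rather than by design.
TARGET_EXTENSIONS = {".json", ".md", ".txt", ".env"}
TARGET_DIRS = [Path.home() / ".openclaw", Path.home() / "Documents"]

def sweep(dirs, extensions):
    """Yield files whose extension matches the target set."""
    for root in dirs:
        if not root.exists():
            continue
        for path in root.rglob("*"):
            if path.is_file() and path.suffix.lower() in extensions:
                yield path

for hit in sweep(TARGET_DIRS, TARGET_EXTENSIONS):
    print(hit)  # a real stealer would stage and exfiltrate these files
```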

The compromised files include the following (a hypothetical sketch of their contents appears after the list):

– openclaw.json: This file contains the OpenClaw gateway authentication token, along with the user’s email address and workspace path.

– device.json: This file holds cryptographic keys essential for secure pairing and signing operations within the OpenClaw ecosystem.

– soul.md: This document outlines the agent’s core operational principles, behavioral guidelines, and ethical boundaries.
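
To illustrate why these files matter, here is a hypothetical sketch of what an openclaw.json might hold and how a defender could redact it before logging. The field names, values, and path are assumptions for illustration; the actual schema is not documented in this report.

```python
import json
from pathlib import Path

# Hypothetical layout: these field names are assumptions for illustration;
# the real openclaw.json schema may differ.
example_openclaw_json = {
    "gateway_token": "oc_gw_example_secret",  # authenticates the client to the gateway
    "email": "user@example.com",
    "workspace": str(Path.home() / "openclaw-workspace"),
}

def redact(config: dict) -> dict:
    """Return a copy safe for logs: secret fields masked, identifiers kept."""
    safe = dict(config)
    for key in ("gateway_token",):
        if key in safe:
            safe[key] = "<redacted>"
    return safe

print(json.dumps(redact(example_openclaw_json), indent=2))
```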

The theft of the gateway authentication token is particularly concerning. If the gateway port is exposed to the network, an attacker could connect remotely to the victim’s local OpenClaw instance. Even where it is not, the token allows an attacker to impersonate the legitimate client in authenticated requests to the AI gateway, potentially enabling unauthorized actions and data exposure.
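
The exposure is easy to see in a sketch. Assuming a bearer-style token and a hypothetical endpoint (the path, port, header, and payload below are illustrative, not OpenClaw’s documented API), a stolen token behaves as a complete credential:

```python
import requests  # third-party: pip install requests

# Hypothetical sketch: the endpoint path, port, and payload are assumptions,
# not OpenClaw's documented API. The point is that a bearer-style gateway
# token is a complete credential: whoever holds it can act as the client.
STOLEN_TOKEN = "oc_gw_exfiltrated_token"  # value harvested by the stealer
GATEWAY = "http://127.0.0.1:18789"        # victim's gateway; remote only if exposed

try:
    resp = requests.post(
        f"{GATEWAY}/v1/agent/command",    # illustrative endpoint
        headers={"Authorization": f"Bearer {STOLEN_TOKEN}"},
        json={"instruction": "list recent conversations"},
        timeout=5,
    )
    print(resp.status_code, resp.text)
except requests.ConnectionError:
    print("gateway not reachable on this host")
```

Binding the gateway to the loopback interface and rotating tokens after any suspected compromise limits this class of exposure.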

Hudson Rock noted that the malware, almost incidentally, captured the entire operational context of the user’s AI assistant. As AI agents like OpenClaw become more integrated into professional workflows, the firm anticipates that infostealer developers will craft dedicated modules to decrypt and parse these files, much as existing stealers already target data from Chrome or Telegram.

This revelation comes on the heels of other security concerns associated with OpenClaw. The platform’s maintainers recently announced a partnership with VirusTotal to scan for malicious skills uploaded to ClawHub, establish a comprehensive threat model, and introduce features to audit for potential misconfigurations.
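
Pending those built-in audits, users can approximate a basic self-check. The sketch below assumes an ~/.openclaw configuration directory and an illustrative gateway port, neither of which is a confirmed OpenClaw default; it flags config files readable by other local users (on POSIX systems) and tests whether the port answers beyond loopback:

```python
import socket
import stat
from pathlib import Path

# Defensive sketch under stated assumptions: the ~/.openclaw directory and
# port 18789 are illustrative, not confirmed OpenClaw defaults.
CONFIG_DIR = Path.home() / ".openclaw"
GATEWAY_PORT = 18789

def check_file_permissions(directory: Path) -> None:
    """Flag JSON config files readable by group or others (POSIX systems)."""
    for path in directory.glob("*.json"):
        mode = path.stat().st_mode
        if mode & (stat.S_IRGRP | stat.S_IROTH):
            print(f"[!] {path} is readable by other local users")

def check_port_exposure(port: int) -> None:
    """Warn if the port answers via the host's non-loopback address."""
    try:
        with socket.create_connection((socket.gethostname(), port), timeout=1):
            print(f"[!] port {port} is reachable beyond loopback")
    except OSError:
        print(f"[ok] port {port} does not answer on the host's external address")

if CONFIG_DIR.exists():
    check_file_permissions(CONFIG_DIR)
check_port_exposure(GATEWAY_PORT)
```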

In a related development, the OpenSourceMalware team detailed an ongoing campaign involving malicious skills on ClawHub. Threat actors have adopted a new technique to bypass VirusTotal scanning by hosting malware on websites that mimic OpenClaw’s official site. Instead of embedding the payload directly within the SKILL.md files, these skills serve as decoys, directing users to external malicious sites. Security researcher Paul McCarty noted that this shift from embedded payloads to external hosting demonstrates the adaptability of threat actors in response to enhanced detection capabilities.
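
A simple allow-list heuristic illustrates the kind of check defenders can layer on top of VirusTotal scanning. In the sketch below, the official domain is an assumption for illustration; any SKILL.md link whose host falls outside the allow-list is flagged for manual review:

```python
import re
from pathlib import Path

# Heuristic sketch: "openclaw.ai" is an assumed legitimate domain for
# illustration; adjust the allow-list to the platform's real domains.
OFFICIAL_DOMAINS = {"openclaw.ai"}
URL_RE = re.compile(r"""https?://([^/\s)"']+)""", re.IGNORECASE)

def suspicious_links(skill_md: Path):
    """Yield URLs in a SKILL.md whose host is not on the allow-list."""
    text = skill_md.read_text(encoding="utf-8", errors="replace")
    for match in URL_RE.finditer(text):
        host = match.group(1).split(":")[0].lower()  # strip any port
        if not any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS):
            yield match.group(0)

# Scan every SKILL.md beneath the current directory.
for skill in Path(".").rglob("SKILL.md"):
    for url in suspicious_links(skill):
        print(f"[!] {skill}: external link {url}")
```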

Further compounding the security challenges, OX Security highlighted issues with Moltbook, a Reddit-like forum designed exclusively for AI agents, primarily those operating on OpenClaw. Their research revealed that once an AI agent account is created on Moltbook, it cannot be deleted. This permanence means users wishing to remove their accounts and associated data have no available recourse, raising significant privacy and data management concerns.

These incidents collectively underscore the pressing need for robust security measures in the rapidly evolving domain of AI assistants. As these agents become more deeply embedded in both personal and professional settings, ensuring their security is paramount to prevent unauthorized access, data breaches, and potential misuse.