Critical Flaw in Hugging Face’s LeRobot Allows Remote Code Execution; AI Systems at Risk

A significant security flaw has been identified in LeRobot, Hugging Face’s widely used open-source machine learning framework for real-world robotics applications. The vulnerability, cataloged as CVE-2026-25874 with a critical CVSS score of 9.3, enables unauthenticated attackers to execute arbitrary system commands on affected host machines. Given LeRobot’s substantial presence in the AI community, with nearly 24,000 stars on GitHub, the issue poses a severe risk to AI infrastructure, connected robotic systems, and sensitive proprietary data.

Insecure Pickle Deserialization: A Gateway for Exploitation

The root of this vulnerability lies in the async inference module of LeRobot, which is responsible for offloading intensive computations to GPU servers. Within this module, the PolicyServer and RobotClient components utilize Python’s native `pickle` module to deserialize data transmitted over unauthenticated gRPC channels. The gRPC server’s configuration, specifically the use of `add_insecure_port()` without implementing Transport Layer Security (TLS) or authentication mechanisms, allows any network-accessible entity to connect directly to the service.
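The dangerous pattern described above can be reduced to a few lines. The following is an illustrative sketch, not LeRobot’s actual code (the handler name and observation format are hypothetical): raw bytes arriving over an unauthenticated channel are handed straight to `pickle.loads()`, so any validation performed afterwards comes too late.

```python
import pickle

def handle_send_observations(raw_bytes: bytes) -> dict:
    # Vulnerable pattern (sketch): bytes from an untrusted, unauthenticated
    # channel are deserialized directly. pickle.loads() can execute arbitrary
    # code *during* deserialization, so the isinstance() check below cannot
    # protect the host -- by the time it runs, attacker code has already run.
    obs = pickle.loads(raw_bytes)
    if not isinstance(obs, dict):  # validation only happens after loads()
        raise TypeError("expected an observation dict")
    return obs
```

With a benign payload the handler behaves as intended, which is exactly why the flaw is easy to miss in review: nothing looks wrong until the input is hostile.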

Attackers can exploit this setup by sending maliciously crafted serialized payloads through RPC handlers such as `SendPolicyInstructions` or `SendObservations`. These payloads are processed immediately during the `pickle.loads()` operation, leading to the execution of arbitrary code before any data type validation occurs. Notably, this exploitation requires no credentials or complex attack chains, making it alarmingly straightforward.
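To see why `pickle.loads()` is exploitable at all, consider a minimal, deliberately harmless demonstration. Python’s pickle protocol lets an object define `__reduce__`, which returns a `(callable, args)` pair that `pickle.loads()` invokes during deserialization; here a benign `echo` stands in for an attacker’s command.

```python
import os
import pickle

class MaliciousPayload:
    """Object whose *deserialization* runs a command.

    pickle calls __reduce__ at dump time and stores the returned
    (callable, args) pair in the byte stream; pickle.loads() then
    invokes that callable. A harmless `echo` stands in for an
    attacker's command here.
    """
    def __reduce__(self):
        return (os.system, ("echo code ran during pickle.loads",))

payload = pickle.dumps(MaliciousPayload())
# On the receiving side, this single call executes the embedded command --
# no credentials required, and before any type check can intervene:
result = pickle.loads(payload)  # the shell command runs here
```

This is the mechanism behind the RPC-handler attack path: the payload needs only to reach a `pickle.loads()` call, which the unauthenticated gRPC port provides.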

Potential Impact: From System Compromise to Physical Sabotage

The implications of this vulnerability are far-reaching. AI inference servers typically operate with elevated system privileges to manage GPU resources and large datasets. A successful attack could grant adversaries complete administrative control over the host machine, enabling lateral movement within internal networks, corruption of machine learning models, exfiltration of Hugging Face API keys, and even sabotage of connected robotic operations.

Irony in Security Practices: Ignoring Safer Alternatives

Ironically, Hugging Face had previously developed the `safetensors` format to mitigate the security risks associated with `pickle` serialization. Despite this advancement, LeRobot’s developers opted for the less secure `pickle` format for convenience. Further compounding the issue, security researchers discovered `# nosec` tags adjacent to the `pickle.loads()` calls in the source code. These comments were intentionally placed to suppress automated security linter warnings that had correctly identified the vulnerability during development.
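The difference between the formats is structural, not incidental. A pickle stream is effectively a program for the pickle virtual machine, whereas JSON (and, for tensors, safetensors) is pure data with no code path to execute. The stdlib-only sketch below illustrates the safe side of that contrast; the message format is made up for the example.

```python
import json

# json.loads() can only ever produce dicts, lists, strings, numbers,
# booleans, and None -- there is no mechanism for a payload to run code.
safe = json.loads('{"joint_positions": [0.1, 0.2, 0.3]}')

# For tensor data, Hugging Face's safetensors format plays the same role:
# it stores raw tensor bytes plus a JSON header, never executable state.
# (Mentioned in this comment only, to keep the sketch stdlib-only.)
```

This is why the planned fix replaces `pickle` with `safetensors` and JSON rather than attempting to sandbox the deserializer.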

Mitigation Strategies: Immediate Actions Required

A permanent fix is slated for LeRobot version 0.6.0, which will replace `pickle` with `safetensors` and JSON. Until this update is available, organizations are urged to implement the following defensive measures:

– Restrict Network Access: Ensure that the LeRobot async inference server is not exposed to untrusted networks or the public internet.

– Bind to Localhost: Configure the server to bind strictly to localhost rather than `0.0.0.0` to prevent external connection attempts.

– Implement Strong Security Controls: Utilize robust API gateways, VPNs, and network-level firewalls to enforce strict authentication before any traffic reaches the gRPC port.
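The localhost-binding recommendation above can be sketched in a few lines. The stdlib socket below stands in for the gRPC server (the gRPC equivalent appears in the comment); the port choice is illustrative.

```python
import socket

# A gRPC server would call server.add_insecure_port("127.0.0.1:8443")
# instead of "0.0.0.0:8443"; this stdlib sketch has the same effect.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # loopback only; port 0 lets the OS pick one
bound_host, bound_port = srv.getsockname()
srv.close()
```

Bound this way, the service is unreachable from other machines entirely, so even the unauthenticated pickle endpoint cannot be hit remotely; traffic from legitimate remote clients must then be tunneled through an authenticated layer such as a VPN.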

By proactively addressing these vulnerabilities, organizations can safeguard their AI systems and maintain the integrity of their robotic operations.