Critical Vulnerabilities in PickleScan Expose AI Models to Arbitrary Code Execution
Security researchers have recently disclosed multiple critical vulnerabilities in PickleScan, an open-source tool widely used to scan machine learning models for malicious code. Prominent platforms, including Hugging Face, rely on PickleScan to vet PyTorch models saved in Python’s pickle format.
Python’s pickle module is flexible but carries a significant security risk: loading a pickle file can execute arbitrary Python code. This allows a model file to silently embed commands that steal data, install backdoors, or compromise entire systems.
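The risk takes only a few lines to demonstrate. The sketch below is deliberately harmless; the `Payload` class and its message are illustrative, but the `__reduce__` hook it uses is exactly the mechanism malicious models abuse.

```python
import pickle

# Harmless demonstration of pickle's code-execution hook: __reduce__
# lets an object name any callable to invoke at load time. A real
# attack would substitute something like os.system for print.
class Payload:
    def __reduce__(self):
        return (print, ("code ran during unpickling",))

data = pickle.dumps(Payload())
pickle.loads(data)  # prints the message as a side effect of loading
```

Nothing in the serialized bytes looks like ordinary model weights; the callable runs simply because the file was loaded.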
Exploitation of Malicious PyTorch Models
Security researchers at JFrog identified that attackers can exploit these vulnerabilities to bypass PickleScan’s checks, so that malicious code executes when the model is loaded in PyTorch.
The first vulnerability, CVE-2025-10155, lets attackers evade detection by changing the file extension. If a malicious pickle file is renamed to an extension commonly associated with PyTorch, such as .bin or .pt, PickleScan may skip analyzing its content, while PyTorch still loads and executes it.
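A content-sniffing check illustrates why the extension is not a reliable signal. The helper below is a hypothetical sketch, not PickleScan’s actual logic: it classifies a file by its leading magic bytes, so a pickle renamed to .bin is still identified as a pickle.

```python
import os
import pickle
import tempfile

# Hypothetical classifier (not PickleScan's code): identify a file by
# its leading bytes rather than by its name.
def classify_by_content(path):
    with open(path, "rb") as f:
        head = f.read(2)
    if head.startswith(b"PK"):
        return "zip"        # PyTorch's newer zip-based container
    if head[:1] == b"\x80":
        return "pickle"     # PROTO opcode opens protocol 2+ pickles
    return "unknown"

# A pickle hidden behind a PyTorch-style extension is still detected.
path = os.path.join(tempfile.gettempdir(), "model.bin")
with open(path, "wb") as f:
    pickle.dump({"weights": [0.1, 0.2]}, f)

print(classify_by_content(path))  # pickle
```

Dispatching on extension instead of content is the fragile behavior this CVE exploits.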
The second vulnerability, CVE-2025-10156, abuses ZIP archive handling by corrupting the CRC (cyclic redundancy check) values inside the archive. The corruption can cause PickleScan to crash or abort, yet PyTorch may still load the model from the damaged archive, creating a blind spot where malware resides undetected.
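The underlying design lesson is that a scanner must fail closed: if an archive cannot be parsed or verified, it should be reported as unsafe rather than silently skipped, since the loader may still accept it. The wrapper below is an assumed sketch of that policy, not PickleScan’s code, using the standard library’s CRC verification.

```python
import io
import zipfile

# Assumed "fail closed" wrapper (not PickleScan's implementation): any
# archive that fails CRC checks or cannot be parsed is reported unsafe.
def scan_archive(source):
    try:
        with zipfile.ZipFile(source) as zf:
            bad = zf.testzip()           # re-reads members, verifying CRCs
            if bad is not None:
                return "unsafe: CRC mismatch in " + bad
            return "ok: %d member(s) scanned" % len(zf.namelist())
    except zipfile.BadZipFile:
        return "unsafe: unreadable archive"

# Build a valid archive, then corrupt one payload byte so the stored
# CRC no longer matches the content.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
    zf.writestr("data.pkl", b"hello world")
raw = bytearray(buf.getvalue())
raw[raw.index(b"hello")] ^= 0xFF         # flip a byte of the stored data

print(scan_archive(io.BytesIO(raw)))     # unsafe: CRC mismatch in data.pkl
```

A scanner that instead crashes or returns “no findings” on the same input produces exactly the blind spot described above.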
The third vulnerability, CVE-2025-10157, targets PickleScan’s blocklist of unsafe modules by using subclasses or internal imports of dangerous modules such as asyncio. Payloads abusing this technique are marked only as Suspicious rather than Dangerous, despite being capable of arbitrary command execution.
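Blocklists fail open by construction: any import path the list does not anticipate slips through. The sketch below shows the inverse approach, an allowlist over the globals a pickle actually references, extracted statically with `pickletools`. The allowlist contents and helper are illustrative, not PickleScan’s implementation, and the stack tracking for STACK_GLOBAL is a simplification that works for straightforward pickles.

```python
import pickle
import pickletools

SAFE_GLOBALS = {("collections", "OrderedDict")}  # hypothetical allowlist

# Statically walk the pickle's opcodes and collect every imported global.
def extract_globals(data):
    found, strings = [], []
    for op, arg, _ in pickletools.genops(data):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        if op.name == "GLOBAL":               # protocol <= 3: "module name"
            module, name = arg.split(" ", 1)
            found.append((module, name))
        elif op.name == "STACK_GLOBAL":       # protocol 4+: two pushed strings
            found.append((strings[-2], strings[-1]))
    return found

class Evil:
    def __reduce__(self):
        return (print, ("side effect",))      # benign stand-in for a payload

flagged = [g for g in extract_globals(pickle.dumps(Evil()))
           if g not in SAFE_GLOBALS]
print(flagged)  # [('builtins', 'print')] -- caught with no blocklist entry
```

Anything not explicitly approved is flagged, so an obscure subclass or internal import of a dangerous module cannot ride through on an incomplete deny list.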
Given that numerous platforms and organizations depend on PickleScan as a primary defense mechanism, these vulnerabilities present a significant supply chain risk for AI models.
Mitigation and Recommendations
JFrog’s team reported these vulnerabilities to the PickleScan maintainer on June 29, 2025. The issues were addressed in version 0.0.31, released on September 2, 2025. Users are strongly advised to upgrade to version 0.0.31 or later immediately.
Beyond upgrading, avoid unsafe pickle-based model formats where possible. Layered defenses, such as sandboxed model loading, safer formats like Safetensors, and vetted model repositories, further reduce the risk these vulnerabilities pose.
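When pickle-based models cannot be avoided, one layered defense from the standard library documentation is a restricted Unpickler that refuses every global not on an explicit allowlist. The allowlist below is illustrative; a real deployment would enumerate the classes its models legitimately need.

```python
import io
import pickle

# Restricted loading (defense in depth only; prefer data-only formats
# such as Safetensors): reject any global not explicitly approved.
class RestrictedUnpickler(pickle.Unpickler):
    ALLOWED = {("collections", "OrderedDict")}  # illustrative allowlist

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

class Evil:
    def __reduce__(self):
        return (print, ("side effect",))        # benign stand-in

data = pickle.dumps(Evil())
try:
    RestrictedUnpickler(io.BytesIO(data)).load()
except pickle.UnpicklingError as exc:
    print(exc)  # blocked global: builtins.print
```

Unlike a scanner run before distribution, this check executes at load time on the consumer’s machine, so it still holds even if a malicious file slipped past upstream screening.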