Critical Vulnerabilities in LangChain and LangGraph Expose Sensitive Data
Recent discoveries have unveiled significant security flaws within the LangChain ecosystem, particularly affecting LangChain Core and LangGraph. These vulnerabilities pose serious risks, including unauthorized access to sensitive information and potential manipulation of large language model (LLM) outputs.
LangChain Core Vulnerability:
A critical flaw, identified as CVE-2025-68664 and dubbed LangGrinch, has been found in LangChain Core, a fundamental Python package in the LangChain suite. This package provides essential interfaces and abstractions for developing LLM-powered applications. The vulnerability, carrying a CVSS score of 9.3, was reported by security researcher Yarden Porat on December 4, 2025.
The issue resides in the `dumps()` and `dumpd()` functions, which fail to properly escape dictionaries containing the `lc` key during serialization. This `lc` key is used internally by LangChain to denote serialized objects. Consequently, when user-controlled data includes this key, it is misinterpreted as a legitimate LangChain object upon deserialization.
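The core of the flaw class can be sketched with plain dictionaries. The toy deserializer below treats any dict carrying an `lc` key as a serialized object, which is the general pattern the advisory describes; the surrounding envelope fields (`id`, `kwargs`) are illustrative assumptions, and this is not LangChain's actual code:

```python
import json

def naive_revive(obj):
    """Toy deserializer: treats any dict with an "lc" key as a
    serialized object and 'instantiates' it. Mimics the flaw class,
    not LangChain's real implementation."""
    if isinstance(obj, dict) and "lc" in obj:
        return f"<instantiated {'.'.join(obj.get('id', []))}>"
    return obj

# Legitimate envelope produced by the library (field layout is an
# illustrative assumption; only the "lc" marker key is from the advisory).
trusted = {"lc": 1, "id": ["langchain", "SomeClass"], "kwargs": {}}

# User-controlled metadata smuggling the same marker key.
untrusted = json.loads('{"lc": 1, "id": ["langchain", "EvilClass"], "kwargs": {}}')

print(naive_revive(trusted))    # <instantiated langchain.SomeClass>
print(naive_revive(untrusted))  # <instantiated langchain.EvilClass> -- user data revived as an object
```

Because nothing distinguishes the two dicts after a JSON round trip, the deserializer happily "revives" attacker-supplied data as an object.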
Exploitation of this flaw can lead to several adverse outcomes:
– Secret Extraction: Attackers can retrieve sensitive information from environment variables, especially when deserialization is performed with the `secrets_from_env=True` setting.
– Unauthorized Class Instantiation: Malicious actors can instantiate classes within trusted namespaces like `langchain_core`, `langchain`, and `langchain_community`, potentially leading to arbitrary code execution.
– Prompt Injection: The vulnerability allows for the injection of LangChain object structures through user-controlled fields such as `metadata`, `additional_kwargs`, or `response_metadata`, enabling manipulation of LLM responses.
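Since the injection points named above are ordinary user- or model-controlled fields, one defensive measure is to scan such data for the marker key before it ever reaches a serializer. The helper below is a hypothetical sketch of that idea, not part of LangChain:

```python
def contains_lc_marker(value):
    """Recursively check whether untrusted data smuggles an "lc" key
    anywhere in its structure. Hypothetical defensive helper for
    illustration; not a LangChain API."""
    if isinstance(value, dict):
        return "lc" in value or any(contains_lc_marker(v) for v in value.values())
    if isinstance(value, (list, tuple)):
        return any(contains_lc_marker(v) for v in value)
    return False

# "metadata" is one of the user-controlled fields named in the advisory.
metadata = {"user": "alice", "extra": {"lc": 1, "type": "constructor"}}
print(contains_lc_marker(metadata))         # True  -- reject or sanitize
print(contains_lc_marker({"user": "bob"}))  # False -- plain data
```

A check like this belongs at the trust boundary, i.e. wherever `metadata`, `additional_kwargs`, or `response_metadata` first absorb external input.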
To mitigate these risks, LangChain has released patches introducing restrictive defaults in the `load()` and `loads()` functions. These updates include an allowlist parameter, `allowed_objects`, enabling users to specify permissible classes for serialization and deserialization. Additionally, Jinja2 templates are now blocked by default, and the `secrets_from_env` option is set to `False` to prevent automatic loading of secrets from the environment.
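The intent of the allowlist approach can be approximated in toy form: revival is refused unless the referenced class path appears on an explicit allowlist, mirroring what the `allowed_objects` parameter is described as doing. This is a simplified sketch of the principle, not LangChain's patched implementation:

```python
def revive(envelope, allowed_objects):
    """Toy allowlist-gated deserializer mirroring the intent of the
    patched load()/loads() defaults; not LangChain's actual code."""
    if not (isinstance(envelope, dict) and "lc" in envelope):
        return envelope  # plain data passes through untouched
    path = ".".join(envelope.get("id", []))
    if path not in allowed_objects:
        raise ValueError(f"refusing to instantiate {path!r}: not allowlisted")
    return f"<instantiated {path}>"

allowed = {"langchain_core.prompts.PromptTemplate"}  # hypothetical entry
safe = {"lc": 1, "id": ["langchain_core", "prompts", "PromptTemplate"], "kwargs": {}}
print(revive(safe, allowed))  # <instantiated langchain_core.prompts.PromptTemplate>

hostile = {"lc": 1, "id": ["langchain_community", "Dangerous"], "kwargs": {}}
try:
    revive(hostile, allowed)
except ValueError as err:
    print(err)  # refusal, instead of silent instantiation
```

Failing closed on anything outside the allowlist is what turns the class-instantiation primitive into a hard error rather than an exploit.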
The affected versions of `langchain-core` are:
– Versions >= 1.0.0 and < 1.2.5 (Fixed in 1.2.5)
– Versions < 0.3.81 (Fixed in 0.3.81)

LangGraph Vulnerability:
In addition to the LangChain Core issue, a similar serialization injection flaw has been identified in LangGraph, a component within the LangChain ecosystem. This vulnerability arises from improper handling of objects with `lc` keys, leading to potential secret extraction and prompt injection.

The affected versions of LangGraph are:
– Versions >= 1.0.0 and < 1.1.8 (Fixed in 1.1.8)
– Versions < 0.3.80 (Fixed in 0.3.80)

Recommendations:
Given the severity of these vulnerabilities, users are strongly advised to update to the patched versions promptly. The primary attack vector involves LLM response fields such as `additional_kwargs` or `response_metadata`, which can be manipulated through prompt injection and subsequently serialized and deserialized during streaming operations. This situation underscores the intersection of AI and traditional security concerns, and highlights the need for organizations to treat LLM outputs as untrusted input and implement robust validation mechanisms.
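As a quick triage aid, the fixed `langchain-core` releases named above (0.3.81 and 1.2.5) can be checked against an installed version string. The naive parser below assumes plain `X.Y.Z` version strings; real-world versions may carry pre-release suffixes, for which a proper PEP 440 parser such as `packaging.version` should be used instead:

```python
def langchain_core_is_patched(version):
    """Compare a plain X.Y.Z langchain-core version string against the
    patched releases from the advisory (0.3.81 for the 0.x line,
    1.2.5 for the 1.x line). Naive sketch: no pre-release handling."""
    parts = tuple(int(p) for p in version.split("."))
    if parts >= (1, 0, 0):
        return parts >= (1, 2, 5)
    return parts >= (0, 3, 81)

print(langchain_core_is_patched("1.2.4"))   # False -- vulnerable
print(langchain_core_is_patched("1.2.5"))   # True
print(langchain_core_is_patched("0.3.80"))  # False -- vulnerable
print(langchain_core_is_patched("0.3.81"))  # True
```

The same two-branch pattern applies to LangGraph with its own thresholds (0.3.80 and 1.1.8).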