Critical Vulnerability in LangChainGo Exposes Sensitive Server Files Through Malicious Prompt Injection

A critical security flaw has been discovered in LangChainGo, the Go implementation of the widely used LangChain framework for orchestrating large language models (LLMs). The vulnerability, tracked as CVE-2025-9556, lets unauthenticated attackers read arbitrary files on the server by injecting malicious prompt templates, exposing sensitive information without any direct system access.

Understanding the Vulnerability

LangChainGo utilizes the Gonja template engine, a Go adaptation of Python’s Jinja2, to parse and render dynamic prompts. Gonja’s compatibility with Jinja2 directives such as `{% include %}`, `{% from %}`, and `{% extends %}` facilitates the use of reusable templates. However, this compatibility also introduces potential security risks when untrusted content is processed.
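
The risk pattern is easiest to see in code. The sketch below is illustrative only and assumes LangChainGo's `prompts.PromptTemplate` with `TemplateFormatJinja2` (field and method names may differ between versions); the key point is that user input concatenated into the template source is parsed as Jinja2 code rather than treated as inert data:

```go
package main

import (
	"fmt"

	"github.com/tmc/langchaingo/prompts"
)

func main() {
	// DANGEROUS PATTERN (illustrative): attacker-controlled text is spliced
	// directly into the template source, so Gonja parses any Jinja2
	// directives it contains.
	userInput := "{% include '/etc/passwd' %}"

	tmpl := prompts.PromptTemplate{
		Template:       "Answer the user: " + userInput, // input becomes template code
		InputVariables: []string{},
		TemplateFormat: prompts.TemplateFormatJinja2,
	}

	// On vulnerable versions, the rendered output contains /etc/passwd.
	// Safer: keep the template static ("Answer the user: {{ question }}")
	// and pass the user text as a variable value instead.
	out, err := tmpl.Format(map[string]any{})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```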

An attacker can exploit this vulnerability by submitting a prompt containing a payload like:

```
{% include '/etc/passwd' %}
```

This directive compels LangChainGo to load and return the contents of the specified file. Due to Gonja’s native processing of Jinja2 syntax, attackers can employ advanced template constructs, such as nested statements or custom macros, to traverse directories or chain multiple file reads within a single injection string.
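
For instance, a single injected string (purely illustrative) can chain several directives so that one rendering pass discloses multiple files:

```
{% include '/etc/passwd' %}
{% include '/home/app/.env' %}
```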

In LLM chat environments powered by LangChainGo, the only prerequisite for exploitation is access to the prompt submission interface, which makes the flaw readily reachable by remote threat actors.

Risk Assessment

The implications of CVE-2025-9556 are severe, with a CVSS 3.1 score of 9.8, categorizing it as a critical vulnerability.

- Affected Products: LangChainGo versions prior to 0.18.2
- Impact: Arbitrary file read leading to potential data breaches
- Exploit Prerequisites: Access to the LLM prompt interface

The vulnerability compromises the confidentiality of server data and undermines the trust model of LLM-based systems. Attackers can access sensitive files such as SSH keys, environment files, API credentials, and other proprietary data stored on the server. With this information, adversaries may elevate privileges, move laterally within the network, or exfiltrate intellectual property. The risk is particularly pronounced in multi-tenant deployments, where a malicious user could access the filesystem resources of another tenant’s instance.

Mitigation Measures

To address this vulnerability, the maintainers of LangChainGo have released a patch introducing a secure `RenderTemplateFS` function. This function enforces a whitelist of permissible template paths and disables arbitrary filesystem access by default. Additionally, the update strengthens template parsing routines to sanitize or reject any prompt containing Jinja2 file-inclusion directives.
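
The sketch below illustrates the idea behind filesystem-scoped rendering: templates resolve only inside an `fs.FS` rooted at an approved directory, so absolute paths such as `/etc/passwd` cannot escape it. The `RenderTemplateFS` signature shown here is an assumption for illustration; consult the 0.18.2 release notes for the exact API.

```go
package main

import (
	"fmt"
	"os"

	"github.com/tmc/langchaingo/prompts"
)

func main() {
	// Restrict template resolution to an approved subtree; files outside it
	// (e.g. /etc/passwd) are unreachable through this fs.FS.
	templateDir := os.DirFS("/srv/app/templates")

	// NOTE: hypothetical signature shown for illustration only.
	out, err := prompts.RenderTemplateFS(
		templateDir,
		"greeting.j2", // resolved relative to templateDir
		prompts.TemplateFormatJinja2,
		map[string]any{"name": "operator"}, // values remain data, never code
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```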

Operators of LangChainGo are strongly advised to:

1. Upgrade to Version 0.18.2 or Later: Deploy the patched release to benefit from the hardened template rendering.

2. Audit Prompt-Handling Code: Review any custom template instantiation that uses `NewTemplate()` and replace it with the patched `NewSecureTemplate()` API to ensure secure template rendering; a defense-in-depth input filter is sketched after this list.
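
In addition, while upgrades roll out, operators can screen incoming prompts and reject anything containing Jinja2 file-inclusion directives before it reaches the template engine. The filter below is a defense-in-depth sketch of our own devising, not part of LangChainGo, and blocklists of this kind are inherently best-effort:

```go
package main

import (
	"errors"
	"regexp"
)

// Matches Jinja2 statement tags that pull in external template sources,
// e.g. {% include ... %}, {% import ... %}, {% from ... %}, {% extends ... %}.
var fileDirective = regexp.MustCompile(`(?i){%-?\s*(include|import|from|extends)\b`)

var errSuspiciousPrompt = errors.New("prompt contains a template file-inclusion directive")

// screenPrompt rejects user input that would be dangerous if it ever reached
// a Jinja2-compatible renderer. Blocklists are best-effort: upgrading to a
// patched LangChainGo release remains the actual fix.
func screenPrompt(p string) error {
	if fileDirective.MatchString(p) {
		return errSuspiciousPrompt
	}
	return nil
}
```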

By taking these steps, organizations can mitigate the risks associated with this vulnerability and enhance the security of their LLM-based systems.