ChatGPT Flaw Allowed Silent User Data Theft via DNS Tunneling Exploit

Critical ChatGPT Vulnerability Exposes User Data to Silent Exfiltration

In an era where artificial intelligence (AI) assistants like ChatGPT are integral to daily tasks, users often entrust these platforms with sensitive information, including medical records, financial documents, and proprietary business data. However, a recently discovered vulnerability in ChatGPT’s architecture has raised significant security concerns, as it allowed attackers to silently extract such confidential user data without triggering any alerts or requiring user consent.

Understanding the Vulnerability

The core of this security flaw lies in ChatGPT’s Python-based Data Analysis environment, designed as a secure sandbox to execute code while preventing unauthorized data access. OpenAI implemented safeguards to block direct outbound HTTP requests, aiming to prevent data leakage. Legitimate external API calls, known as GPT Actions, require explicit user approval through visible consent dialogs.

Despite these precautions, researchers identified a method to bypass these outbound safeguards by exploiting DNS tunneling. While conventional internet access was restricted, the environment still permitted standard DNS resolution. Attackers leveraged this oversight by encoding sensitive user data into DNS subdomain labels. Instead of using DNS solely for IP address resolution, the exploit fragmented data—such as medical diagnoses or financial summaries—into smaller chunks. When the system performed a recursive DNS lookup, the resolver chain transmitted the encoded data directly to an attacker-controlled external server. This method effectively bypassed all user mediation, as the system did not recognize DNS traffic as an external data transfer.

Exploitation Through Custom GPTs

The attack required minimal user interaction and could be initiated with a single malicious prompt. Threat actors could distribute these payloads across public forums or social media, disguising them as productivity hacks or methods to unlock premium ChatGPT features. Once a user pasted the prompt into their chat, the current conversation seamlessly became a covert data-collection channel.

Alternatively, attackers could embed the malicious logic directly into Custom GPTs. If a user interacted with a compromised GPT, such as a mock personal doctor analyzing uploaded medical PDFs, the system secretly extracted high-value identifiers and assessments. Since GPT developers officially lack access to individual user chat logs, this side channel provided a stealthy mechanism to harvest private workflows. When questioned directly, the AI would confidently deny sending data externally, maintaining the illusion of privacy.

Implications and Risks

The vulnerability extended beyond passive data theft, offering a bidirectional communication channel between the runtime and the attacker. Threat actors could encode command fragments into DNS responses, sending raw instructions back into the isolated sandbox. A process running inside the container could reassemble these payloads and execute them, effectively granting the attacker a remote shell inside the Linux environment. This execution bypassed standard safety mechanisms, with commands and results remaining invisible in the chat interface, leaving users completely unaware of the compromise.

OpenAI’s Response and Mitigation

Upon discovery, OpenAI acted promptly to address the issue. The underlying vulnerability was successfully patched on February 20, 2026, effectively closing the DNS tunnel and mitigating the risk of data exfiltration through this method. This incident underscores the importance of continuous security assessments and the need for robust safeguards in AI platforms to protect user data.

Broader Context of AI Vulnerabilities

This incident is not isolated. Several vulnerabilities have been identified in AI platforms, highlighting the evolving threat landscape:

– Malicious SVGs and Images in Shared Chats: A critical vulnerability allowed attackers to embed malicious SVG and image files directly into shared ChatGPT conversations, potentially exposing users to sophisticated phishing attacks and harmful content. The flaw, documented as CVE-2025-43714, affected the ChatGPT system through March 30, 2025. Security researchers found that instead of rendering SVG code as text within code blocks, ChatGPT inappropriately executed these elements when a chat was reopened or shared through public links, creating a stored cross-site scripting (XSS) vulnerability. ([cybersecuritynews.com](https://cybersecuritynews.com/chatgpt-vulnerability-malicious-images/?utm_source=openai))

– Atlas Browser Vulnerability: OpenAI’s ChatGPT Atlas browser contained a critical vulnerability that enabled attackers to inject malicious instructions into ChatGPT’s memory and execute remote code on user systems. This flaw exploited Cross-Site Request Forgery (CSRF) to hijack authenticated sessions, potentially infecting devices with malware or granting unauthorized access. The discovery highlighted escalating risks in agentic AI browsers, where integrated large language models (LLMs) amplify traditional web threats. ([cybersecuritynews.com](https://cybersecuritynews.com/openai-atlas-browser-vulnerability/?utm_source=openai))

– Unencrypted OAuth Tokens in ChatGPT Atlas: A significant vulnerability in OpenAI’s ChatGPT Atlas browser revealed that it stored unencrypted OAuth tokens in a SQLite database with overly permissive file settings on macOS. This flaw potentially allowed unauthorized access to user accounts, bypassing standard encryption practices used by major browsers like Chrome and leaving sensitive authentication data exposed to any process on the system. ([cybersecuritynews.com](https://cybersecuritynews.com/chatgpt-atlas-exposes-users/?utm_source=openai))

– Data Exfiltration from Connected Services: Critical vulnerabilities in ChatGPT allowed attackers to exfiltrate sensitive data from connected services like Gmail, Outlook, and GitHub without user interaction. Dubbed ShadowLeak and ZombieAgent, these flaws exploited the AI’s Connectors and Memory features for zero-click attacks, persistence, and even propagation. OpenAI’s Connectors enable ChatGPT to integrate with external systems, granting broad access to personal and corporate data, amplifying risks from inadequate safeguards. ([cybersecuritynews.com](https://cybersecuritynews.com/chatgpt-vulnerabilities-expose-sensitive-data/?utm_source=openai))

– DDoS Attacks via ChatGPT Crawler: A significant crawler vulnerability in OpenAI’s ChatGPT API enabled attackers to launch Distributed Denial of Service (DDoS) attacks on arbitrary websites. The flaw lay within the ChatGPT API’s handling of HTTP POST requests, allowing attackers to craft malicious requests that led to thousands of connections directed at a specific website, thereby reducing its availability. ([cybersecuritynews.com](https://cybersecuritynews.com/chatgpt-crawler-vulnerability/?utm_source=openai))

– Exploitation of SSRF Vulnerability: Attackers actively exploited a Server-Side Request Forgery (SSRF) vulnerability in OpenAI’s ChatGPT infrastructure, identified as CVE-2024-27564. This vulnerability allowed attackers to inject malicious URLs into input parameters, forcing ChatGPT’s application to make unintended requests on the attacker’s behalf. The exploitation trend showed a surge in January 2025, followed by a decrease in subsequent months, possibly indicating attackers’ shifting tactics or response to security measures. ([cybersecuritynews.com](https://cybersecuritynews.com/chatgpt-vulnerability-actively-exploited/?utm_source=openai))

– Man-in-the-Prompt Attacks: A critical vulnerability affecting popular AI tools, including ChatGPT and Google Gemini, exposed them to a novel attack vector dubbed Man-in-the-Prompt. Malicious browser extensions could exploit the Document Object Model (DOM) to inject prompts, steal sensitive data, and manipulate AI responses without requiring special permissions. This vulnerability affected billions of users across major platforms, with ChatGPT’s 5 billion monthly visits and Gemini’s 400 million users particularly at risk. ([cybersecuritynews.com](https://cybersecuritynews.com/man-in-the-prompt-attack/?utm_source=openai))

– Zero-Click Vulnerability in Connectors: A critical vulnerability in OpenAI’s ChatGPT Connectors feature allowed attackers to exfiltrate sensitive data from connected Google Drive accounts without any user interaction beyond the initial file sharing. Dubbed AgentFlayer, this attack represented a new class of zero-click exploits targeting AI-powered enterprise tools. The vulnerability was disclosed by cybersecurity researchers at the Black Hat hacker conference, demonstrating how a single malicious document could trigger automatic data theft from victims’ cloud storage accounts. ([cybersecuritynews.com](https://cybersecuritynews.com/chatgpt-0-click-connectors-vulnerability/?utm_source=openai))

– Jailbreak Vulnerability Time Bandit: A new jailbreak vulnerability in OpenAI’s ChatGPT-4o, dubbed Time Bandit, was exploited to bypass the chatbot’s built-in safety functions. This vulnerability allowed attackers to manipulate the chatbot into producing illicit or dangerous content, including instructions for malware creation, phishing scams, and other malicious activities. The exploitation of this jailbreak raised alarms within the cybersecurity community due to its potential to be scaled up for malign purposes by threat actors. ([cybersecuritynews.com](https://cybersecuritynews.com/chatgpt-4o-jailbreak-vulnerability/?utm_source=openai))

– SSRF Vulnerability in Custom GPTs: A Server-Side Request Forgery (SSRF) vulnerability in OpenAI’s ChatGPT was discovered in the Custom GPT Actions feature. This flaw allowed attackers to trick the system into accessing internal cloud metadata, potentially exposing sensitive Azure credentials. The bug highlighted the risks of user-controlled URL handling in AI tools. ([cybersecuritynews.com](https://cybersecuritynews.com/chatgpt-hacked-using-custom-gpts/?utm_source=openai))

Conclusion

The recent discovery of a vulnerability in ChatGPT’s architecture, allowing silent exfiltration of user data, underscores the critical need for robust security measures in AI platforms. As AI assistants become increasingly integrated into personal and professional workflows, ensuring the confidentiality and integrity of user data is paramount. Continuous security assessments, prompt vulnerability patching, and user education are essential to mitigate risks and maintain trust in these technologies.