OpenAI Addresses ChatGPT Data Exposure: Insights and Implications
In March 2023, OpenAI encountered a significant security incident involving its ChatGPT service, where a bug in the Redis open-source library led to the unintended exposure of user data. This breach allowed certain users to view brief descriptions of other users’ conversations from the chat history sidebar. In some cases, the first message of a newly created conversation was visible to another user if both were active simultaneously. OpenAI promptly responded by temporarily shutting down the chatbot to address the issue.
Root Cause Analysis
The vulnerability originated from a flaw in the `redis-py` library, which caused canceled requests to corrupt connections, returning unexpected data from the database cache. This flaw resulted in users receiving information belonging to unrelated individuals. Compounding the problem, a server-side change introduced by OpenAI inadvertently increased request cancellations, thereby elevating the error rate.
Extent of the Exposure
Beyond chat histories, the incident potentially revealed payment-related information of approximately 1.2% of ChatGPT Plus subscribers on March 20, 2023, between 1-10 a.m. PT. Exposed data included users’ first and last names, email addresses, payment addresses, the last four digits of credit card numbers, and credit card expiration dates. Notably, full credit card numbers remained secure. OpenAI has since reached out to affected users to inform them of the inadvertent leak.
Mitigation Measures
In response to the incident, OpenAI implemented redundant checks to ensure that data returned by the Redis cache matches the requesting user. This measure aims to prevent similar occurrences in the future.
Broader Security Concerns
This incident is part of a series of security challenges faced by OpenAI. In June 2025, the company banned ChatGPT accounts operated by Russian, Iranian, and Chinese hacker groups. These groups misused the platform for activities such as malware development, social media automation, and research into U.S. satellite communications technologies. The Russian-speaking actor, for instance, utilized ChatGPT to refine Windows malware and set up command-and-control infrastructure. They employed temporary email accounts to sign up for ChatGPT, using each account for a single conversation to incrementally improve their malicious software before abandoning it. This method highlights the adversary’s focus on operational security.
Vulnerabilities in AI Platforms
In November 2025, researchers identified vulnerabilities in OpenAI’s GPT-4o and GPT-5 models that could be exploited to steal personal information from users’ memories and chat histories. These vulnerabilities exposed the AI system to indirect prompt injection attacks, allowing attackers to manipulate the behavior of large language models (LLMs) and trick them into performing unintended or malicious actions. Techniques included embedding malicious instructions in web pages or search results, leading the LLM to execute them when summarizing content. Such attacks underscore the need for robust security measures in AI platforms to prevent data breaches and unauthorized access.
Malicious Browser Extensions
In January 2026, cybersecurity researchers discovered two malicious Chrome extensions designed to exfiltrate ChatGPT and DeepSeek conversations, along with browsing data, to servers controlled by attackers. Collectively, these extensions had over 900,000 users. Once installed, they requested permissions to collect anonymized browser behavior but instead harvested information about open browser tabs and chatbot conversation data. This data could be weaponized for corporate espionage, identity theft, targeted phishing campaigns, or sold on underground forums. The incident highlights the importance of scrutinizing browser extensions and the potential risks they pose to user privacy.
Persistent Hidden Commands Exploit
In October 2025, a new vulnerability in OpenAI’s ChatGPT Atlas web browser was discovered, allowing malicious actors to inject nefarious instructions into the AI assistant’s memory and run arbitrary code. This exploit leveraged a cross-site request forgery (CSRF) flaw to inject malicious instructions into ChatGPT’s persistent memory. The corrupted memory could persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user’s account, browser, or connected systems. The attack underscores the need for robust anti-phishing controls and secure memory management in AI platforms.
Misuse of AI for Surveillance and Influence Campaigns
In February 2025, OpenAI revealed that it banned accounts that used ChatGPT to develop a suspected AI-powered surveillance tool. The tool, likely originating from China, was designed to collect real-time data and reports about anti-China protests in the West and share insights with Chinese authorities. This misuse of AI for surveillance and influence campaigns raises ethical and security concerns, highlighting the need for vigilant monitoring and regulation of AI applications.
Data Protection Measures
In response to growing concerns over data protection, OpenAI launched ChatGPT Health in January 2026. This dedicated space allows users to have conversations with the chatbot about their health, offering the optional ability to securely connect medical records and wellness apps. ChatGPT Health builds on strong privacy, security, and data controls, including purpose-built encryption and isolation to keep health conversations protected and compartmentalized. This initiative reflects OpenAI’s commitment to enhancing data security and user privacy in its AI offerings.
Regulatory Actions
In April 2023, the Italian data protection watchdog, Garante per la Protezione dei Dati Personali, imposed a temporary ban on OpenAI’s ChatGPT service in the country, citing data protection concerns. The authority ordered the company to stop processing users’ data with immediate effect, stating it intended to investigate whether OpenAI was unlawfully processing data in violation of the EU General Data Protection Regulation (GDPR) laws. This regulatory action underscores the importance of compliance with data protection laws and the need for transparency in AI data processing practices.
Conclusion
The series of security incidents and vulnerabilities associated with ChatGPT highlight the complex challenges in safeguarding user data within AI platforms. OpenAI’s proactive measures, including patching vulnerabilities, banning malicious accounts, and enhancing data protection features, demonstrate a commitment to user privacy and security. However, these events serve as a reminder of the ongoing need for vigilance, robust security protocols, and regulatory compliance in the rapidly evolving landscape of artificial intelligence.