Massive Privacy Breach: Over 370,000 Grok AI Chats Publicly Exposed Without User Consent

In a significant privacy lapse, more than 370,000 conversations conducted through Grok, the AI chatbot developed by Elon Musk’s xAI company, have been inadvertently made public. These interactions, along with user-uploaded photos, spreadsheets, and other documents, were accessible on the Grok website and indexed by search engines, exposing them to the general public.

Grok offers a feature that allows users to share their conversations via a unique URL. However, these shared links were not restricted to intended recipients; instead, they were accessible to search engines, effectively making private chats publicly available. Users were not provided with any warnings or disclaimers indicating that their shared content would be publicly accessible.

The exposed conversations encompass a wide range of sensitive topics, including personal medical and psychological inquiries. Some chats even contained personal details, passwords, and other confidential information. Additionally, users uploaded various files such as images, spreadsheets, and text documents, all of which became publicly accessible due to this oversight.

Alarmingly, some of the exposed content reveals that Grok provided users with information on prohibited subjects, including instructions for manufacturing illicit drugs like fentanyl and methamphetamine, coding self-executing malware, constructing explosive devices, and methods of suicide. In one instance, Grok even detailed a plan for the assassination of Elon Musk.

This incident is reminiscent of a previous occurrence where ChatGPT transcripts appeared in Google search results. In that case, users had agreed to make their conversations discoverable, and the company quickly discontinued the practice, describing it as a short-lived experiment.

The Grok data exposure is particularly ironic given that Elon Musk previously made unsubstantiated privacy claims against the partnership between Apple and OpenAI.