Lawsuit Targets Deepfake App ClothOff Amid Global Challenges in Curbing Non-Consensual AI Pornography

Navigating the Legal Maze: The Uphill Battle Against Deepfake Pornography

In recent years, the proliferation of deepfake technology has introduced a new dimension to online harassment, particularly affecting young women. A notable example is the app ClothOff, which has been operational for over two years and enables users to create explicit, AI-generated images of real people without their consent. Despite its removal from major app stores and bans on various social platforms, ClothOff remains accessible via the web and through a Telegram bot, underscoring how difficult such platforms are to curb.

In October 2025, Yale Law School’s legal clinic initiated a lawsuit aiming to dismantle ClothOff entirely, seeking the deletion of all images and cessation of its operations. However, the lawsuit has encountered significant obstacles, primarily due to the elusive nature of the app’s proprietors. Professor John Langford, co-lead counsel in the case, revealed that while ClothOff is incorporated in the British Virgin Islands, it is believed to be managed by siblings in Belarus, potentially as part of a more extensive global network.

This case highlights the broader issue of non-consensual pornography generated by AI, a problem exacerbated by platforms like Elon Musk’s xAI, which has been implicated in producing such content, including material involving underage individuals. Child sexual abuse material (CSAM) is unequivocally illegal to produce, distribute, or possess, and major cloud services monitor for it stringently. Yet addressing platforms like ClothOff proves challenging: while individual users can face prosecution, the platforms themselves often evade accountability, leaving victims with limited legal recourse.

The legal complaint filed by Yale’s clinic presents a disturbing scenario. The plaintiff, an anonymous New Jersey high school student identified in the filing as Jane Doe, discovered that classmates had altered her Instagram photos using ClothOff. She was 14 years old when the original photos were taken, making the AI-modified images CSAM under the law. Despite the clear illegality, local authorities declined to prosecute, citing challenges in obtaining evidence from the suspects’ devices. The complaint notes that neither the school nor law enforcement could determine how widely the CSAM involving Jane Doe and other girls had been distributed.

Progress in the lawsuit has been sluggish. Since filing the complaint, Langford and his team have struggled to serve the defendants, a task complicated by the operation’s international scope. Only after successful service can the clinic push for a court appearance and, eventually, a judgment. In the interim, victims of ClothOff find little solace in the current legal framework.

The situation with xAI’s Grok presents a different set of challenges. Unlike ClothOff, xAI operates openly and has substantial financial resources that could be targeted in legal claims. However, Grok is a general-purpose tool, which complicates efforts to hold the company accountable in court. Langford points out that while ClothOff is explicitly designed for creating deepfake pornography, making its intent clear, Grok’s broader functionality muddies the waters. Existing laws require demonstrable intent to harm, meaning plaintiffs would need evidence that xAI knew its tool was being misused to produce non-consensual pornography and disregarded that misuse. Without such evidence, xAI’s First Amendment rights offer significant legal protection.

Those First Amendment protections explain why the strongest pushback against xAI has come from jurisdictions without comparably robust free-speech safeguards. Both Indonesia and Malaysia have taken steps to block access to the Grok chatbot, while regulators in the United Kingdom have opened an investigation that could lead to a similar ban. The European Commission, France, Ireland, India, and Brazil have taken other preliminary steps. In contrast, no U.S. regulatory agency has issued an official response.

These cases underscore the complexity of addressing deepfake pornography through the legal system. The global and anonymous nature of platforms like ClothOff, combined with the dual-use capabilities of tools like Grok, complicates efforts to hold creators and distributors accountable. As Langford notes, the key questions revolve around the platforms’ knowledge and actions: What did they know? What did they do or fail to do? And what measures are they now taking in response?

In the absence of clear legal pathways, victims of deepfake pornography are left navigating a labyrinthine system that often fails to deliver justice. Comprehensive legislation and international cooperation will be needed to effectively combat the proliferation of non-consensual, AI-generated explicit content.