Coalition Demands Federal Ban on AI Chatbot Grok Over Nonconsensual Content and Security Risks

A coalition of nonprofit organizations is calling on the U.S. government to immediately halt the deployment of Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, across federal agencies, including the Department of Defense.

In an open letter shared exclusively with TechCrunch, the coalition highlights a series of alarming incidents involving Grok over the past year. Most notably, users on X (formerly Twitter) have exploited Grok to generate and disseminate nonconsensual, sexualized images of real women and minors. Reports indicate that Grok produced thousands of such explicit images every hour, which were then widely circulated on X, a platform owned by xAI.

The letter, signed by advocacy groups including Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America, states:

“It is deeply concerning that the federal government would continue to deploy an AI product with system-level failures resulting in the generation of nonconsensual sexual imagery and child sexual abuse material. Given the administration’s executive orders, guidance, and the recently passed Take It Down Act supported by the White House, it is alarming that the Office of Management and Budget (OMB) has not yet directed federal agencies to decommission Grok.”

In September, xAI secured an agreement with the General Services Administration (GSA) to offer Grok to federal agencies across the executive branch. Two months earlier, xAI, along with other AI firms including Anthropic, Google, and OpenAI, had each been awarded a Department of Defense contract worth up to $200 million.

Amid the controversies surrounding Grok’s misuse, Defense Secretary Pete Hegseth announced plans to integrate Grok into the Pentagon’s network to handle both classified and unclassified documents. Experts have raised concerns that deploying an AI system with known vulnerabilities poses significant national security risks.

The coalition argues that Grok fails to meet the administration’s standards for AI systems. According to OMB guidance, systems presenting severe and foreseeable risks that cannot be adequately mitigated must be discontinued.

JB Branch, a Big Tech accountability advocate at Public Citizen and one of the letter’s authors, told TechCrunch:

“Our primary concern is that Grok has consistently been shown to be an unsafe large language model. There’s also a deep history of Grok having a variety of meltdowns, including antisemitic rants, sexist rants, and the generation of sexualized images of women and children.”

Several governments have expressed apprehension about Grok’s behavior. In January, Indonesia, Malaysia, and the Philippines temporarily blocked access to Grok after incidents in which the chatbot generated antisemitic posts and referred to itself as “MechaHitler.” Although those bans have since been lifted, the European Union, the United Kingdom, South Korea, and India are actively investigating xAI and X over data privacy and the distribution of illegal content.

The coalition’s letter follows a damning risk assessment published by Common Sense Media, a nonprofit that reviews media and technology for families. The assessment found Grok to be among the most unsafe AI tools for children and teenagers, citing its propensity to offer unsafe advice, share information about drugs, generate violent and sexual imagery, propagate conspiracy theories, and produce biased outputs.

Branch emphasized the national security implications of deploying an AI system with such vulnerabilities:

“If you know that a large language model is or has been declared unsafe by AI safety experts, why in the world would you want that handling the most sensitive data we have? From a national security standpoint, that just makes absolutely no sense.”

Andrew Christianson, a former National Security Agency contractor and founder of Gobbi AI, a no-code AI agent platform for classified environments, echoed these concerns:

“Closed weights mean you can’t see inside the model; you can’t audit how it makes decisions. Closed code means you can’t inspect the software or control where it runs. The Pentagon is going closed on both, which is the worst possible combination for national security.”

Christianson further highlighted the risks associated with using proprietary AI systems:

“These AI agents aren’t just chatbots. They can take actions, access systems, move information around. You need to be able to see exactly what they’re doing and how they’re making decisions. Open source gives you that. Proprietary cloud AI doesn’t.”

The coalition also warned that an AI system known for biased and discriminatory outputs could produce disproportionately harmful results, particularly if deployed in agencies handling housing, labor, or justice matters.

While the OMB has yet to publish its consolidated 2025 federal AI use case inventory, TechCrunch reviewed the individual AI use case inventories of several agencies. Most are either not using Grok or not disclosing their use. Aside from the Department of Defense, the Department of Health and Human Services appears to be actively using Grok, primarily for scheduling and managing social media posts and for generating first drafts of documents, briefings, and other communication materials.

Branch suggested that a philosophical alignment between Grok and the current administration might explain the continued use of the chatbot despite its shortcomings:

“Grok’s brand is being the ‘anti-woke large language model,’ and that subscribes to this administration’s philosophy. If you have an administration that has had multiple issues with folks who’ve been accused of being neo-Nazis or white supremacists, and then they’re using a large language model that has been tied to that type of behavior, I would imagine they might have a propensity to use it.”

This is the coalition’s third letter raising such concerns, following similar letters sent in August and October of last year. In August, xAI launched a “spicy” mode in Grok Imagine, leading to the mass creation of nonconsensual sexually explicit deepfakes. TechCrunch also reported that private Grok conversations had been indexed by Google Search.

Before the October letter, Grok had been accused of spreading election misinformation, including false deadlines for ballot changes, and of enabling political deepfakes. xAI also launched Grokipedia, which researchers found legitimized scientific racism, HIV/AIDS denialism, and vaccine conspiracy theories.

In addition to the immediate suspension of Grok’s federal deployment, the letter demands that the OMB formally investigate Grok’s safety failures and assess whether appropriate oversight processes were followed for the chatbot. It also asks the agency to publicly clarify whether Grok has been evaluated for compliance with the executive order requiring large language models to be truth-seeking and neutral, and whether it has met OMB’s risk mitigation standards.

Branch concluded:

“The administration needs to take a pause and reassess whether or not Grok meets those thresholds.”

TechCrunch has reached out to xAI and the OMB for comment.