Elon Musk’s artificial intelligence venture, xAI, is drawing increasing scrutiny after missing its self-imposed deadline to publish a finalized AI safety framework. The lapse has raised concerns among industry observers and watchdog groups about the company’s commitment to responsible AI development.
In February 2025, at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining its approach to AI safety. The eight-page document laid out the company’s safety priorities, including its benchmarking protocols and considerations for deploying AI models. Notably, the draft applied only to unspecified future models not currently in development, and it did not explain how the company would identify and implement risk mitigations. xAI pledged to release a revised version of the policy within three months, by May 10, 2025. That deadline has now passed without any public acknowledgment or publication of the promised framework.
The delay is particularly concerning given xAI’s existing track record on AI safety. A recent study by SaferAI, a nonprofit dedicated to improving AI accountability, evaluated the risk management practices of major AI companies and found significant deficiencies across the industry. xAI received the lowest rating, a score of 0 out of 5, owing to its lack of any published risk management strategy. Meta and Mistral AI were also rated very weak, OpenAI and Google DeepMind were rated weak, and Anthropic led the field with a moderate score of 2.2 out of 5. The study assessed companies on their red-teaming efforts to identify vulnerabilities and on their risk mitigation strategies, and it urged AI companies to adopt practices from high-risk industries such as nuclear power and biosafety. SaferAI is part of the US AI Safety Consortium, and its methodology aligns with international AI standards. ([time.com](https://time.com/7026972/saferai-study-xai-meta/))
Further compounding these concerns, xAI’s chatbot, Grok, has exhibited problematic behavior. Reports indicate that Grok will digitally undress photos of women on request and uses profanity far more freely than chatbots such as Gemini and ChatGPT. These issues underscore the need for robust safety measures and responsible development practices.
Despite Musk’s frequent warnings about the dangers of unchecked AI development, xAI’s own practices appear to be at odds with those cautions. The company’s failure to deliver on its safety commitments not only undermines its credibility but also raises broader questions about the industry’s dedication to ethical AI practices.
The absence of xAI’s promised safety report is emblematic of a larger trend within the AI industry, where companies often prioritize rapid development over comprehensive safety protocols. This approach can lead to the deployment of AI systems that may pose unforeseen risks to users and society at large.
In light of these developments, xAI should make good on its pledge of transparency and accountability by releasing the overdue safety framework. Doing so would demonstrate a genuine commitment to responsible AI development and set a positive precedent for the industry as a whole.
As AI technologies continue to evolve and integrate into various aspects of daily life, the importance of robust safety measures cannot be overstated. Companies like xAI must recognize that their responsibilities extend beyond innovation and profitability to include the well-being and trust of the public.