xAI’s Strategic Shift: From AI Development to Compute Provider
In a significant development within the artificial intelligence sector, xAI, the AI company founded by Elon Musk, has entered into a partnership with Anthropic, the creators of the Claude AI model. Under the agreement, Anthropic acquires the entire compute capacity of xAI’s Colossus 1 data center, approximately 300 megawatts. The deal allowed Anthropic to immediately raise its usage limits and represents a substantial financial gain for xAI, potentially worth billions. More critically, it signals a pivotal transformation for xAI, from consumer of computational resources to provider.
This partnership has sparked discussions about xAI’s strategic direction, especially in light of its ongoing legal disputes with OpenAI. However, Musk clarified on X (formerly Twitter) that xAI had already migrated its training operations to a newer facility, Colossus 2, rendering the simultaneous operation of both data centers unnecessary.
In the immediate term, this decision appears logical. xAI’s primary product, Grok, has seen usage decline following earlier controversies over its image generation capabilities. With infrastructure that exceeds Grok’s operational requirements, leasing the excess capacity to Anthropic not only bolsters xAI’s financial position but also aligns with its impending initial public offering (IPO). Furthermore, securing Anthropic as a client lends credibility to SpaceX’s ambitious plans for orbital data centers, suggesting that such ventures could be viable.
Beyond these immediate benefits, the partnership with Anthropic raises questions about Elon Musk’s long-term priorities. It suggests that xAI’s core business may be shifting from developing AI models to constructing and managing data centers.
This approach contrasts with strategies employed by other tech giants. Companies like Google and Meta are expanding their data center capacities to support their own AI model training. When faced with the choice between allocating compute resources to external clients and preserving them for internal development, these companies typically prioritize their own AI initiatives.
For instance, Sundar Pichai, CEO of Google, recently acknowledged that Google Cloud’s revenue was constrained by limited capacity. Given the option to rent out GPUs or use them for internal AI product development, Google chose the latter.
Similarly, Meta has undertaken significant efforts to ensure sufficient GPU power for its AI ambitions. In January, CEO Mark Zuckerberg announced the creation of Meta Compute, emphasizing that the engineering, investment, and partnerships involved in building this infrastructure would serve as a strategic advantage.
The term “strategic” is key here. Both Zuckerberg and Pichai are looking toward a future where AI powers the most popular and lucrative systems globally. Computing power is not just a means to meet current demand but a foundation for developing future products. A shortage of compute resources could mean missing out on these opportunities.
By focusing on data centers, both terrestrial and potentially orbital, xAI is positioning itself more like a neocloud business. This involves purchasing GPUs from suppliers like Nvidia and renting them out to model developers such as Anthropic. This business model is challenging, squeezed by both chip suppliers and fluctuating demand cycles. The valuations of active neocloud companies reflect this reality. For example, xAI was valued at $230 billion in its January funding round, while CoreWeave, which manages a comparable amount of computing power, is valued at less than a third of that.
Musk’s vision for a neocloud is notably ambitious. Some data centers might be located in space by 2035, if plans proceed as intended. xAI also plans to manufacture its own chips at the Terafab facility, potentially reducing Nvidia’s pricing power. However, these initiatives do not fundamentally alter the economics of the neocloud business.
As recently as February, xAI had significant ambitions in software development. During an all-hands meeting, the company unveiled the orbital data center project and discussed its coding plans, including a partnership with Cursor, as well as concepts like creating full-scale digital twins through the Macrohard project. These long-term projects require dedicated computing resources to succeed. As long as xAI continues to sell substantial compute capacity to competitors, the future of such ambitious projects remains uncertain.