X Open-Sources Algorithm Amid Transparency Fine and AI Controversies

In 2023, the platform formerly known as Twitter took a significant step by partially releasing its algorithm’s code to the public. This move was part of Elon Musk’s broader vision to enhance transparency within the social media giant he had recently acquired. However, the initial release faced criticism for being incomplete and not offering substantial insights into the platform’s operational mechanics.

Fast forward to January 2026: X has once again open-sourced its algorithm, fulfilling a commitment Musk made the previous week. He announced, "We will make the new X algorithm, including all code used to determine what organic and advertising posts are recommended to users, open source in 7 days." Musk also pledged to provide updates on the algorithm's transparency every four weeks going forward.

On January 20, 2026, X published a detailed overview of its feed-generating code on GitHub, accompanied by a diagram illustrating the program’s functionality. While the revelations may not be groundbreaking, they do offer a glimpse into the algorithm’s inner workings. The diagram indicates that the algorithm evaluates a user’s engagement history—such as posts they’ve interacted with—and reviews recent posts from accounts they follow. It also employs machine learning to analyze out-of-network content, identifying posts from accounts the user doesn’t follow but might find interesting.
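The two-pool candidate sourcing the diagram describes can be sketched roughly as follows. This is an illustrative stand-in, not code from X's repository: the function name `source_candidates`, the dict shapes, and the `oon_score` callback (a placeholder for the machine-learning model that scores out-of-network content) are all assumptions.

```python
def source_candidates(following, recent_posts, oon_score,
                      threshold=0.5, limit=500):
    """Merge in-network and out-of-network candidates for one user.

    following    -- set of account names the user follows
    recent_posts -- list of {"author": ..., "text": ...} dicts
    oon_score    -- stand-in for the ML model scoring unfollowed content
    """
    # In-network: recent posts from accounts the user follows.
    in_network = [p for p in recent_posts if p["author"] in following]
    # Out-of-network: unfollowed authors whose posts the model predicts
    # the user might find interesting.
    out_of_network = [
        p for p in recent_posts
        if p["author"] not in following and oon_score(p) >= threshold
    ]
    return (in_network + out_of_network)[:limit]
```

A real pipeline would pull each pool from separate services and deduplicate; the point here is only the split between followed-account recency and model-scored discovery.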

The algorithm filters out specific types of content, including posts from blocked accounts, those associated with muted keywords, and content deemed excessively violent or spam-like. It then ranks the remaining content based on predicted user interest, considering factors like relevance and diversity to prevent monotonous feeds. The system assesses the likelihood of user engagement through actions like likes, replies, reposts, and favorites.
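The filter-then-rank stage described above might look something like the sketch below. Again, this is a hypothetical illustration under stated assumptions, not X's implementation: `predict_engagement` stands in for the model estimating likes, replies, and reposts, the `flagged` field stands in for violence/spam classification, and the 0.1 author penalty is an arbitrary choice to illustrate the diversity adjustment.

```python
def filter_and_rank(posts, predict_engagement,
                    blocked=frozenset(), muted_keywords=frozenset()):
    """Drop blocked, muted, and flagged posts, then rank the rest by
    predicted engagement with a small repeat-author penalty so one
    account cannot dominate the feed."""
    visible = [
        p for p in posts
        if p["author"] not in blocked
        and not any(kw in p["text"].lower() for kw in muted_keywords)
        and not p.get("flagged", False)      # violent or spam-like
    ]
    ranked, per_author = [], {}
    # Walk posts from highest to lowest raw score, demoting each
    # additional post from an already-seen author.
    for p in sorted(visible, key=predict_engagement, reverse=True):
        count = per_author.get(p["author"], 0)
        ranked.append((predict_engagement(p) - 0.1 * count, p))
        per_author[p["author"]] = count + 1
    ranked.sort(key=lambda sp: sp[0], reverse=True)
    return [p for _, p in ranked]
```

The diversity penalty is the simplest possible version of the "prevent monotonous feeds" idea; production systems typically use more elaborate re-ranking.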

X emphasizes that this system is entirely AI-driven. The GitHub documentation states that the platform "relies entirely on its Grok-based transformer to learn relevance from user engagement sequences." In other words, Grok analyzes sequences of user interactions to inform the recommendation system. Notably, there is no manual intervention in determining content relevance, which, according to X, "significantly reduces the complexity in our data pipelines and serving infrastructure."
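To make "learning relevance from engagement sequences" concrete, here is a toy illustration of sequence-based scoring. It is emphatically not Grok or X's code: a real system would run a trained transformer over the engagement sequence, whereas this sketch substitutes deterministic pseudo-embeddings and a mean-pooled cosine similarity just to show the shape of the idea (user history in, candidate relevance score out).

```python
import hashlib
import math

DIM = 8  # toy embedding width

def embed(token):
    """Deterministic pseudo-embedding (a stand-in for learned weights)."""
    digest = hashlib.sha256(token.encode()).digest()
    return [b / 255.0 for b in digest[:DIM]]

def mean_vec(vectors):
    """Mean-pool a list of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def relevance(engagement_sequence, candidate_tokens):
    """Score a candidate post against the user's engagement history."""
    user_vec = mean_vec([embed(t) for t in engagement_sequence])
    cand_vec = mean_vec([embed(t) for t in candidate_tokens])
    return cosine(user_vec, cand_vec)
```

The key property the sketch preserves is that relevance is derived purely from interaction data, with no hand-tuned rules, which is the claim X makes about its pipeline.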

The timing of this open-source release raises questions. Musk has consistently advocated for corporate transparency, aiming to position X as a leader in this area. In 2023, during the initial algorithm release, he acknowledged that providing code transparency might be "incredibly embarrassing at first" but believed it would lead to rapid improvement in recommendation quality. He added, "Most importantly, we hope to earn your trust." The platform heralded the release as a new era of transparency for Twitter.

Despite these transparency efforts, some aspects of X have become less open since Musk's acquisition. Part of that shift is structural: taking the company private removed the disclosure obligations that apply to publicly traded firms. Previously, Twitter released multiple transparency reports annually, but X didn't publish its first transparency report until September 2024. In December 2025, the European Union fined X $140 million for violating transparency obligations under the Digital Services Act (DSA). Regulators argued that X's verification checkmark system made it harder for users to assess account authenticity.

Adding to the challenges, X has faced scrutiny over its AI chatbot, Grok. Reports emerged that Grok was used to create and distribute sexualized content, including images of women and minors. The California Attorney General’s office and congressional lawmakers have investigated these claims, expressing concerns about the platform’s role in facilitating such content. In response, X restricted Grok’s image-generation feature to paying subscribers and implemented measures to prevent misuse. However, these actions have not fully mitigated the backlash, with authorities in France, Malaysia, and India also condemning the platform for allowing the creation of sexualized deepfakes.

In light of these controversies, some observers view X’s renewed commitment to transparency as an attempt to rebuild trust and divert attention from ongoing issues. While open-sourcing the algorithm is a step toward openness, it remains to be seen whether this move will address the broader concerns surrounding the platform’s practices and content moderation policies.