X Enforces Strict Measures Against Unlabeled AI-Generated Content on Armed Conflicts
In a decisive move to uphold content authenticity, X, the social media platform formerly known as Twitter, has announced stringent penalties for creators who share AI-generated videos depicting armed conflicts without proper disclosure. Nikita Bier, X’s head of product, revealed that offenders will face a 90-day suspension from the platform’s Creator Revenue Sharing Program. Repeat violations will result in permanent expulsion from the program.
Bier emphasized the importance of accurate information during wartime, stating, "During times of war, it is critical that people have access to authentic information on the ground. With today's AI technologies, it is trivial to create content that can mislead people." He reiterated that the 90-day suspension applies to any user who posts AI-generated videos of armed conflicts without appropriate disclosure.
To identify such misleading content, X plans to rely on a combination of AI detection tools and its crowdsourced fact-checking system, Community Notes. This dual approach is intended to preserve the platform's integrity by flagging AI-generated media before it can mislead users.
The Creator Revenue Sharing Program allows users to earn income by sharing in advertising revenue generated from their popular posts. While the program is designed to encourage engaging content, critics argue that it may incentivize sensationalism, including clickbait and outrage-inducing posts. Concerns have also been raised about the program’s content controls and the requirement for creators to be paid subscribers to participate.
The ease with which AI can produce misleading photos and videos presents a significant challenge. X’s decision to penalize creators for undisclosed AI-generated content related to armed conflicts is a step toward mitigating misinformation. However, this policy does not extend to other areas where AI-generated media is used to spread political misinformation or promote deceptive products within the influencer economy.
This move by X reflects a growing awareness and response to the potential dangers posed by AI-generated content in shaping public perception, especially during sensitive times such as armed conflicts. By enforcing these measures, X aims to foster a more transparent and trustworthy environment for its users.