Meta Enhances Content Moderation with Advanced AI Systems, Reducing Third-Party Dependence
Meta has announced the deployment of advanced artificial intelligence (AI) systems to enhance content enforcement across its platforms, including Facebook and Instagram. This strategic shift aims to reduce reliance on third-party vendors by automating the detection and removal of content related to terrorism, child exploitation, drug trafficking, fraud, and scams.
The company plans to implement these AI systems across its applications once they consistently outperform existing content enforcement methods. While human reviewers will continue to play a role, the AI systems are designed to handle tasks better suited to technology, such as repetitive reviews of graphic content and adapting to evolving tactics used by malicious actors.
Early testing indicates that these AI systems detect twice as much adult sexual solicitation content as human review teams while cutting error rates by 60%. They are also effective at identifying and preventing impersonation accounts targeting celebrities and other high-profile individuals, and at detecting signals indicative of account takeovers, such as logins from new locations or unauthorized profile edits.
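The account-takeover signals described above lend themselves to simple rule-based scoring. The sketch below is purely illustrative: the function names, signal weights, and threshold are invented for this example and do not reflect Meta's actual system, which would combine far more signals with learned models.

```python
# Hypothetical sketch of rule-based account-takeover scoring.
# Signals mirror those mentioned in the article (logins from new
# locations, unauthorized profile edits); weights and the threshold
# are invented for illustration only.

RISK_THRESHOLD = 0.7  # assumed cutoff for sending an event to review


def takeover_risk(event: dict, known_locations: set) -> float:
    """Return a risk score in [0, 1] for a single account event."""
    score = 0.0
    if event.get("type") == "login" and event.get("location") not in known_locations:
        score += 0.5  # login from a location never seen on this account
    if event.get("type") == "profile_edit" and not event.get("session_verified", False):
        score += 0.6  # profile change made outside a verified session
    if event.get("new_device", False):
        score += 0.3  # unrecognized device fingerprint
    return min(score, 1.0)


def flag_for_review(events: list, known_locations: set) -> list:
    """Return only the events whose combined signals exceed the threshold."""
    return [e for e in events if takeover_risk(e, known_locations) >= RISK_THRESHOLD]
```

For example, a login from an unseen location on an unrecognized device scores 0.5 + 0.3 and is flagged, while a routine login from a known location scores 0.0 and passes through.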
Additionally, the AI systems have been successful in identifying and mitigating approximately 5,000 scam attempts daily, in which scammers try to deceive users into divulging their login credentials.
Despite the increased automation, human oversight remains crucial. Experts will continue to design, train, oversee, and evaluate the AI systems, particularly in making complex, high-impact decisions, such as handling appeals of account disablements or reporting to law enforcement.
This development follows Meta’s recent changes in content moderation policies. In January 2025, the company ended its third-party fact-checking program in favor of a Community Notes model, similar to that used by X.com. This shift was part of a broader move to loosen content moderation rules, including lifting restrictions around topics considered part of mainstream discourse and encouraging a personalized approach to political content.
The introduction of advanced AI systems for content enforcement comes at a time when Meta, along with other major tech companies, faces multiple lawsuits seeking to hold social media platforms accountable for potential harm to children and young users.
In addition to enhancing content enforcement, Meta has launched a Meta AI support assistant, providing users with 24/7 support. This assistant is being rolled out globally on the Facebook and Instagram apps for iOS and Android, as well as within the Help Center on desktop versions of these platforms.