Meta Deploys AI to Detect and Remove Underage Users on Facebook and Instagram
In a significant move to enhance child safety on its platforms, Meta has announced the deployment of artificial intelligence (AI) technology designed to identify and remove users under the age of 13 from Facebook and Instagram. This initiative involves scanning photos and videos for visual indicators such as height and bone structure to estimate a user’s age.

AI-Powered Age Detection

Meta’s new system uses AI to analyze visual cues in user-generated content, focusing on general physical indicators such as height and bone structure. The company emphasizes that this approach does not involve facial recognition or the identification of specific individuals; instead, the AI estimates a user’s approximate age from these visual elements. By combining these estimates with analyses of text and user interactions, Meta aims to significantly increase the detection and removal of underage accounts.

Global Rollout and Expansion Plans

Currently, this visual analysis system is operational in select countries, with Meta planning a broader rollout in the near future. This initiative is part of Meta’s ongoing efforts to prevent children under 13 from accessing its platforms. The company also employs AI to scrutinize entire profiles for contextual clues, such as birthday celebrations or mentions of school grades, across various formats like posts, comments, bios, and captions. Future plans include extending this technology to other parts of its apps, including Instagram Live and Facebook Groups.

Account Deactivation and Age Verification

When the AI system identifies a user who may be underage, Meta will deactivate their account. To regain access and prevent the account’s eventual deletion, the user must verify their age through Meta’s age verification process. This measure underscores Meta’s commitment to maintaining a safe online environment for all users.

Legal Context and Company Response

This announcement follows a recent legal setback for Meta. A New Mexico jury ordered the company to pay $375 million in civil penalties for misleading consumers about the safety of its platforms and putting children at risk. Additionally, Meta was mandated to implement fundamental changes to its platforms. In response, Meta has threatened to shut down its social media services in the state.

Broader Efforts in Teen Safety

In addition to the AI-powered age detection system, Meta is expanding its Teen Accounts feature on Instagram to 27 countries in the European Union and Brazil. These accounts provide a stricter experience with additional safeguards, such as limiting direct messages to people the user follows or is already connected to, hiding harmful comments, and setting accounts to private by default. Meta also plans to bring Teen Accounts to Facebook in the U.S., followed by the U.K. and EU in June.

Conclusion

Meta’s implementation of AI to detect underage users represents a significant step in its ongoing efforts to enhance child safety. By leveraging the technology to identify and remove accounts belonging to users under 13, and pairing it with the expansion of Teen Accounts and other safeguards, the company is signaling its commitment to addressing concerns about the safety of minors on social media.