In a significant move to bolster child safety on its platform, Meta has announced enhanced safeguards for Instagram accounts that predominantly feature children. The initiative, unveiled on July 23, 2025, aims to reduce the risks these accounts face by introducing stricter messaging settings and content filters.
Enhanced Messaging and Content Controls
Adult-managed accounts that regularly showcase photos and videos of children will now be placed automatically into Instagram’s most stringent messaging settings, which are designed to prevent unsolicited messages from unknown users. In addition, the platform’s Hidden Words feature will be enabled by default, filtering offensive comments and messages out of view. These measures target accounts operated by parents, guardians, or talent managers on behalf of children.
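To make the mechanism concrete, the sketch below shows one way such default-on safeguards could be applied once an account is classified as child-focused. This is a minimal illustration only: the `Account` model, its field names, and the setting values are assumptions for the example, not Meta’s actual code or internal API.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Illustrative account model; all field names are assumptions."""
    owner_is_adult: bool
    regularly_features_children: bool
    message_setting: str = "everyone"   # hypothetical default level
    hidden_words_enabled: bool = False
    notify_settings_review: bool = False

def apply_child_safety_defaults(account: Account) -> Account:
    """Sketch of the defaulting described in the announcement:
    adult-run accounts that regularly feature children are moved to
    the strictest messaging setting, get Hidden Words turned on, and
    are flagged for the in-feed notification mentioned later on."""
    if account.owner_is_adult and account.regularly_features_children:
        account.message_setting = "strictest"
        account.hidden_words_enabled = True
        account.notify_settings_review = True
    return account

# Example: a parent-run account picks up the stricter defaults.
acct = apply_child_safety_defaults(
    Account(owner_is_adult=True, regularly_features_children=True)
)
print(acct.message_setting, acct.hidden_words_enabled)  # strictest True
```

The key point the announcement makes is that these are opt-out defaults triggered by how the account is used, rather than settings the account owner must discover and enable.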
Meta acknowledges that while most of these accounts are used for benign purposes, a subset of users abuses them, leaving inappropriate comments or soliciting explicit content in clear violation of the platform’s community guidelines. The new safeguards are meant to cut off such abuse proactively.
Restricting Access for Suspicious Users
To further safeguard child-focused accounts, Meta will attempt to keep potentially suspicious adults, such as those previously blocked by teen users, from discovering these accounts. The platform will stop recommending child-focused accounts to suspicious users (and vice versa) and will make such accounts harder for those users to surface through Instagram search.
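As a rough sketch of this kind of two-way recommendation filtering, the example below removes flagged candidates before ranking. The blocked-by-teens set is used as the suspicion signal because the announcement names prior blocks by teen users as one such indicator; the function names and data shapes are hypothetical.

```python
def is_suspicious(user_id: str, blocked_by_teens: set[str]) -> bool:
    """Hypothetical signal: treat a user previously blocked by teen
    accounts as potentially suspicious."""
    return user_id in blocked_by_teens

def filter_recommendations(
    viewer_id: str,
    candidates: list[str],
    child_focused: set[str],
    blocked_by_teens: set[str],
) -> list[str]:
    """Sketch: avoid recommending child-focused accounts to suspicious
    viewers, and avoid recommending suspicious accounts to viewers who
    run child-focused accounts (the 'vice versa' described above)."""
    if is_suspicious(viewer_id, blocked_by_teens):
        candidates = [c for c in candidates if c not in child_focused]
    if viewer_id in child_focused:
        candidates = [c for c in candidates
                      if not is_suspicious(c, blocked_by_teens)]
    return candidates

# Example: the suspicious viewer no longer sees the child-focused account.
print(filter_recommendations(
    "viewer_1", ["kid_acct", "other"], {"kid_acct"}, {"viewer_1"}
))  # ['other']
```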
Context and Background
This announcement comes in the wake of heightened scrutiny over social media’s impact on mental health, particularly among younger users. Over the past year, Meta and Instagram have undertaken various initiatives to address these concerns. Notably, the U.S. Surgeon General and several states have raised alarms about the potential harms of social media, with some jurisdictions even requiring parental consent for minors to access these platforms.
The new measures are expected to significantly affect family vloggers and parents who manage accounts for kidfluencers, groups that have drawn criticism for exposing children to potential risks by sharing their lives publicly on social media. A New York Times investigation found that some parents are aware of, or even participate in, the exploitation of their children by monetizing their online presence; the investigation examined 5,000 parent-run accounts and found 32 million connections to male followers, underscoring the scale of the problem.
Implementation and Enforcement
Accounts subject to the new safety settings will see a notification at the top of their Instagram feed informing them of the changes and prompting them to review their privacy settings. Separately, Meta reported removing nearly 135,000 Instagram accounts that were sexualizing child-focused accounts, along with an additional 500,000 associated accounts across Instagram and Facebook.
Additional Safety Features for Teen Accounts
In conjunction with these measures, Meta is introducing new safety features for teen accounts. These enhancements include safety tips that encourage teens to scrutinize profiles carefully and be mindful of the information they share. Furthermore, the month and year when an account joined Instagram will now be displayed at the top of new chats, providing additional context. A new combined block and report option has also been added, allowing users to perform both actions simultaneously.
These features are designed to provide teens with more context about the accounts they interact with and to help them identify potential scammers. Meta reports that in June alone, teens blocked accounts 1 million times and reported another 1 million after receiving safety notices.
Ongoing Efforts and Future Directions
Meta presents these changes as part of a continuing effort to develop and roll out protective measures. The company notes that 99% of users, including teens, have kept the nudity protection filter enabled, and that in the previous month over 40% of blurred images received in direct messages stayed blurred, meaning recipients chose not to reveal them, which Meta cites as a sign of user trust in these safety features.
These initiatives reflect Meta’s proactive approach to creating a safer online environment for children and teens, addressing current challenges while anticipating future risks associated with social media use.