Global Rollout of Internet Age Verification Laws Sparks Privacy Concerns and Industry Debate

The implementation of stringent internet age verification laws in the United Kingdom and the United States has ignited a global conversation about online safety, privacy, and the responsibilities of tech companies. These legislative measures aim to protect minors from accessing inappropriate content but have raised significant concerns regarding user privacy and the practicality of enforcement.

United Kingdom’s Online Safety Act

On July 25, 2025, the age verification provisions of the United Kingdom’s Online Safety Act (OSA) came into force, mandating that all websites and applications offering adult content, as well as certain gaming, social media, and dating platforms, implement robust age verification mechanisms to ensure users are 18 or older. Simple self-declaration, such as ticking a checkbox, is deemed insufficient under the law; platforms must instead employ techniques such as facial age estimation, identification document checks, and photo matching to verify user ages. Platforms including Reddit, Bluesky, X (formerly Twitter), and Grindr have already complied with these requirements.
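
To make the tiered approach concrete, the sketch below shows how a platform-side check might combine a facial age estimate with a fallback document check before unlocking 18+ content. It is a minimal illustration under stated assumptions: the AgeCheckResult type, the is_adult function, and the specific confidence and age thresholds are hypothetical and are not drawn from Ofcom guidance or any verification provider’s actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeCheckResult:
    """Hypothetical result returned by a third-party age assurance provider."""
    estimated_age: Optional[int]   # None if no estimate could be produced
    confidence: float              # provider-reported confidence, 0.0 to 1.0

def is_adult(facial_estimate: AgeCheckResult,
             id_document_age: Optional[int] = None,
             threshold_age: int = 18,
             min_confidence: float = 0.9) -> bool:
    """Decide whether a user clears an 18+ gate using tiered checks.

    Accept a high-confidence facial age estimate first, and fall back to a
    verified identity document when the estimate is missing, borderline,
    or low-confidence.
    """
    if (facial_estimate.estimated_age is not None
            and facial_estimate.confidence >= min_confidence
            and facial_estimate.estimated_age >= threshold_age + 5):
        # Comfortably over the threshold: no document check needed.
        return True
    if id_document_age is not None:
        # Fallback: age taken from a checked identity document.
        return id_document_age >= threshold_age
    # No reliable signal: deny access rather than assume adulthood.
    return False

# A borderline, low-confidence facial estimate is resolved by a document check.
print(is_adult(AgeCheckResult(estimated_age=19, confidence=0.8), id_document_age=21))  # True
```

The extra margin above 18 in the facial-estimation branch mirrors the common industry practice of trusting an estimate only when a user appears comfortably over the threshold and asking for documents otherwise.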

However, the OSA’s broad scope has led to concerns about overreach. The legislation encompasses over 200 types of content, many of which are vaguely defined. This includes not only explicit adult material but also information on topics like birth control, sexual health, and reporting sexual abuse. Critics argue that such expansive coverage could inadvertently restrict access to vital information for teenagers seeking guidance on these subjects.

United States’ Legislative Efforts

In the United States, similar legislative efforts are underway. The Kids Online Safety Act (KOSA), which closely mirrors the UK’s OSA, has been reintroduced in the House of Representatives, and its supporters anticipate it could become law later this year. The bill would hold websites and apps accountable for preventing minors from accessing age-inappropriate content, which in practice would necessitate comprehensive age verification processes.

Privacy and Data Security Concerns

The implementation of these age verification laws has sparked significant privacy concerns. Experts warn that requiring users to hand over sensitive personal information, such as government-issued IDs or biometric data, could enable surveillance and expose users to data breaches. The collection and storage of such data by unregulated third-party verification services pose substantial risks. The identity verification company AU10TIX, for instance, previously exposed sensitive user data, including names, dates of birth, nationalities, identification numbers, and copies of identification documents.

Moreover, the effectiveness of these measures is questionable, as tech-savvy minors may circumvent restrictions using Virtual Private Networks (VPNs) or other workarounds. Regulators such as the UK’s Office of Communications (Ofcom) discourage VPN use and warn that platforms promoting it may face legal consequences, but enforcing such restrictions is difficult in practice. At the same time, a VPN alone may not defeat an age gate, because platforms can infer a user’s real location from other signals such as GPS data, cookies, and device fingerprinting.
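
As a rough illustration of that point, the sketch below shows how a service might reconcile conflicting location signals; the apparent_country function and its inputs are hypothetical and do not describe any particular platform’s implementation.

```python
from typing import Optional

def apparent_country(ip_country: str,
                     gps_country: Optional[str] = None,
                     cookie_country: Optional[str] = None) -> str:
    """Rough sketch of cross-checking location signals.

    A VPN changes the IP-derived country, but a GPS reading or a country
    recorded in a cookie before the VPN was switched on may still reveal
    where the user really is. Device fingerprinting (not modelled here)
    can similarly tie a session back to an earlier, un-proxied one.
    """
    # Prefer signals that a VPN does not alter over the IP address.
    for signal in (gps_country, cookie_country):
        if signal is not None:
            return signal
    return ip_country

# A user tunnelling through a US VPN whose earlier session stored a UK
# country cookie is still treated as UK-based and shown the age gate.
print(apparent_country(ip_country="US", cookie_country="GB"))  # GB
```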

Impact on Tech Companies and Platforms

The new regulations have placed significant pressure on tech companies to develop and deploy age verification technologies. Platforms such as TikTok, Reddit, and X are introducing AI-based age assurance tools, including facial age estimation and behavioral assessment models, to comply with the law. Ofcom will assess compliance until the end of September, and violations carry potential penalties of up to £18 million or 10% of global turnover, whichever is greater.

Apple has also introduced new child safety initiatives, including an age-checking system for apps. The company has developed age assurance technology that lets parents share a child’s age with app developers without providing sensitive information such as birthdays or identification numbers. The move comes as several U.S. states and federal lawmakers consider age-verification laws for social media and other apps; states such as Utah and South Carolina are debating laws that would require app store operators such as Apple and Alphabet’s Google to verify users’ ages. Apple opposes collecting sensitive data for such verifications and instead proposes that parents enter a child’s age when setting up the child’s account. Parents can then allow the child to share an age range, rather than an exact birthdate, with third-party developers, while retaining control over that information.
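
The data-minimization idea behind that approach can be illustrated with a short sketch: an exact birthdate entered by a parent stays on the device, and only a coarse label is ever passed to a third-party app. The age_range_token function, the bucket boundaries, and the labels below are illustrative assumptions, not Apple’s actual API.

```python
from datetime import date
from typing import Optional

# Coarse ranges a parent might allow a child to share instead of a birthdate.
AGE_RANGES = [(0, 12, "under_13"), (13, 15, "13_15"), (16, 17, "16_17"), (18, 200, "18_plus")]

def age_range_token(birthdate: date, today: Optional[date] = None) -> str:
    """Map an exact birthdate (kept on the device) to a coarse age-range label.

    Only the label would be shared with a third-party developer, which is
    the data-minimization idea described above.
    """
    today = today or date.today()
    age = today.year - birthdate.year - ((today.month, today.day) < (birthdate.month, birthdate.day))
    for low, high, label in AGE_RANGES:
        if low <= age <= high:
            return label
    raise ValueError("age outside supported range")

# A child born in mid-2011 is reported only as being in the 13-15 bracket.
print(age_range_token(date(2011, 6, 1), today=date(2025, 7, 25)))  # 13_15
```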

Legislative Developments in U.S. States

Several U.S. states have taken legislative action to enforce age verification at the app store level. Utah became the first state to pass legislation requiring app stores to verify users’ ages and obtain parental consent for minors’ app downloads. The law, known as the App Store Accountability Act, places that responsibility on Apple and Google rather than on individual apps such as Instagram, Snapchat, and X. It is designed to protect children who may not understand apps’ terms of service and therefore cannot meaningfully agree to them. Apple and Google will need to run age verification checks when someone creates a new account in the state, likely using credit cards. If someone under 18 opens an app store account, Apple or Google will have to link it to a parent’s account or request additional documentation, and parents will have to consent to in-app purchases.
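
A simplified sketch of that account-creation and consent flow is shown below; the StoreAccount type, the function names, and the rules they encode are assumptions made for illustration, not the text of the Utah statute or the behavior of Apple’s or Google’s actual systems.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class StoreAccount:
    user_id: str
    verified_age: int
    parent_account_id: Optional[str] = None
    consented_purchases: Set[str] = field(default_factory=set)

def create_account(user_id: str, verified_age: int,
                   parent_account_id: Optional[str] = None) -> StoreAccount:
    """Create an app store account; minors must be linked to a parent account.

    The alternative documentation path mentioned above is omitted here.
    """
    if verified_age < 18 and parent_account_id is None:
        raise PermissionError("accounts for under-18s must be linked to a parent account")
    return StoreAccount(user_id, verified_age, parent_account_id)

def can_purchase(account: StoreAccount, item_id: str) -> bool:
    """In-app purchases by minors require prior parental consent for each item."""
    if account.verified_age >= 18:
        return True
    return item_id in account.consented_purchases

# Example: a 15-year-old linked to a parent can only buy items the parent approved.
teen = create_account("kid42", verified_age=15, parent_account_id="parent7")
teen.consented_purchases.add("coins_100")
print(can_purchase(teen, "coins_100"), can_purchase(teen, "coins_500"))  # True False
```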

Texas is also on the verge of enacting Senate Bill 2420, mandating age verification on Apple and Google app stores. The bill would require app store providers to confirm device users’ ages and secure parental consent for those under 18 to download apps or make in-app purchases. This move parallels similar legislation in Utah and a proposed federal bill, reflecting growing bipartisan support for child online safety. The bill has drawn criticism from Apple and Google, who argue that it imposes excessive data collection requirements on uncontroversial apps and prefer selective age data sharing. Meanwhile, child safety advocates support the legislation, citing the failure of industry self-regulation.

Industry Response and Debate

The tech industry is divided on the issue of age verification. Meta, the parent company of Facebook and Instagram, supports legislation requiring app stores to verify ages and obtain parental approval for app downloads, arguing that app stores are better positioned to handle age verification. Apple and Google oppose this approach, contending that individual app developers should handle it. Apple argues that requiring age verification at the app marketplace level runs counter to data minimization, since it would mean collecting sensitive personal information from all users, even those who never access age-restricted apps.

Conclusion

The rollout of internet age verification laws in the UK and the US represents a significant shift in online regulation aimed at protecting minors. However, these measures have sparked a complex debate involving privacy concerns, the effectiveness of enforcement, and the responsibilities of tech companies. As these laws take effect, ongoing dialogue among lawmakers, tech companies, privacy advocates, and the public will be crucial to balance the protection of minors with the preservation of user privacy and access to information.