In 2024, Google significantly intensified its efforts to combat ad fraud by suspending 39.2 million advertiser accounts, more than tripling the previous year’s figures. This substantial increase underscores the company’s commitment to maintaining a secure and trustworthy advertising ecosystem.
Central to this initiative is the deployment of advanced large language models (LLMs). These AI-driven systems analyze signals such as business impersonation and fraudulent payment information, enabling Google to suspend the majority of malicious accounts before they can serve a single ad. In 2024 alone, Google rolled out more than 50 enhancements to these LLMs, strengthening safety enforcement across its platforms.
Alex Rodriguez, General Manager for Ads Safety at Google, emphasized the synergy between AI and human oversight in this process. He highlighted that while AI models have delivered impressive improvements, human experts remain integral to the system, ensuring nuanced and context-aware decision-making.
A notable area of concern has been the proliferation of deepfake ad scams, particularly those involving the impersonation of public figures. To address this, Google assembled a dedicated team comprising members from the Ads Safety team, the Trust and Safety division, and researchers from DeepMind. This collaborative effort led to the development of technical countermeasures and the introduction of over 30 updates to ads and publisher policies. These measures resulted in the suspension of over 700,000 offending advertising accounts and a reported 90% reduction in deepfake ad incidents.
Geographically, the United States saw the most enforcement activity, with 39.2 million accounts suspended and 1.8 billion ads removed. The leading violation categories in the U.S. were ad network abuse, trademark misuse, healthcare claims, personalized-ads policy breaches, and misrepresentation. India followed with 2.9 million account suspensions and 247.4 million ads removed, chiefly for violations involving financial services, trademark misuse, ad network abuse, personalized ads, and gambling.
In total, Google removed nearly half a billion ads associated with scams and suspended 5 million accounts for scam-related violations. The company also verified over 8,900 new election advertisers in 2024—a year marked by significant global electoral activity—and removed 10.7 million election-related ads. Even at that volume, Rodriguez noted, election ads made up only a minimal share of Google's overall ad inventory, too small to meaningfully shift the company's broader enforcement figures.
Overall, Google blocked 5.1 billion ads and took action against 1.3 billion publisher pages in 2024, down from 5.5 billion ads blocked and 2.1 billion publisher pages acted on in 2023. The decline suggests that Google's preventive measures are working: earlier detection and suspension of malicious accounts mean fewer harmful ads ever reach the platform.
Additionally, Google restricted 9.1 billion ads in 2024. While enforcement at this scale can raise concerns about overreach, Google's pairing of AI with human oversight aims to balance robust enforcement with fairness, so that legitimate advertisers are not unduly affected.
These concerted efforts reflect Google’s ongoing commitment to safeguarding the integrity of its advertising platforms, leveraging cutting-edge AI technologies alongside human expertise to stay ahead of evolving fraudulent tactics.