Google’s AI Shields Android Users from 10 Billion Scam Calls and Messages Each Month
In a significant advancement for mobile security, Google has announced that its integrated artificial intelligence (AI) defenses on Android devices are now intercepting over 10 billion suspected scam calls and messages each month. This proactive measure underscores the company’s commitment to safeguarding users from the ever-evolving landscape of digital threats.
A key component of this defense strategy is the blocking of more than 100 million suspicious numbers from utilizing Rich Communication Services (RCS). RCS, an evolution of the traditional SMS protocol, offers enhanced messaging features but has also become a target for malicious actors. By preemptively preventing these numbers from sending messages, Google effectively stops potential scams before they reach users’ inboxes.
Over recent years, Google has rolled out a series of safeguards against both call and text scams. Using on-device AI, Google Messages for Android automatically filters known spam messages into the app’s spam & blocked folder, so users are less likely to encounter malicious content in their daily conversations.
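To make that routing concrete, here is a minimal Kotlin sketch of how an on-device filter might divert a suspected scam text into a spam folder. The class names, the classifier interface, and the 0.9 confidence threshold are illustrative assumptions, not Google’s actual implementation.

```kotlin
// Illustrative sketch only: a hypothetical on-device filter that routes
// messages flagged by a local model into a "spam & blocked" folder.
// None of these names correspond to Google's real implementation.

enum class Folder { INBOX, SPAM_AND_BLOCKED }

data class Message(val sender: String, val body: String)

// Hypothetical interface standing in for an on-device spam model.
fun interface SpamClassifier {
    fun spamScore(message: Message): Double // 0.0 (ham) .. 1.0 (spam)
}

class MessageRouter(
    private val classifier: SpamClassifier,
    private val threshold: Double = 0.9 // assumed confidence cutoff
) {
    fun route(message: Message): Folder =
        if (classifier.spamScore(message) >= threshold) Folder.SPAM_AND_BLOCKED
        else Folder.INBOX
}

fun main() {
    // Toy classifier: flags messages containing a known scam phrase.
    val router = MessageRouter { msg ->
        if ("unpaid toll" in msg.body.lowercase()) 0.95 else 0.1
    }
    val folder = router.route(Message("+15550100", "Your unpaid toll is due today"))
    println(folder) // SPAM_AND_BLOCKED
}
```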
In October 2025, Google expanded these protections by rolling out safer links in Google Messages globally. The feature warns users when they tap a URL in a message flagged as spam and blocks access to the site unless the user marks the message as not spam, adding another layer of defense against phishing attempts and malicious links.
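The behavior described above amounts to a simple gate at link-tap time. The Kotlin fragment below is only a sketch of that logic, with hypothetical types and field names standing in for whatever Google Messages actually uses internally.

```kotlin
// Illustrative sketch only: a URL in a message flagged as spam triggers a
// warning screen; marking the message "not spam" removes the gate.
// All names here are hypothetical.

data class Sms(val body: String, var markedNotSpam: Boolean, val flaggedAsSpam: Boolean)

sealed class LinkAction {
    data class Open(val url: String) : LinkAction()
    data class Warn(val url: String) : LinkAction()
}

fun onLinkTapped(message: Sms, url: String): LinkAction =
    if (message.flaggedAsSpam && !message.markedNotSpam) LinkAction.Warn(url)
    else LinkAction.Open(url)

fun main() {
    val msg = Sms(
        body = "Your package is held: http://example.test/track",
        markedNotSpam = false,
        flaggedAsSpam = true
    )
    println(onLinkTapped(msg, "http://example.test/track")) // Warn(...)

    msg.markedNotSpam = true // user overrides the spam verdict
    println(onLinkTapped(msg, "http://example.test/track")) // Open(...)
}
```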
An analysis of user-submitted reports in August 2025 revealed that employment fraud is the most prevalent scam category: job seekers are lured with fake offers designed to steal personal and financial information. Other common schemes involve fake notices about unpaid bills, subscriptions, and fees, as well as bogus investment opportunities. Scams built around package deliveries, government agency impersonation, romance, and technical support are also on the rise.
A notable shift in scam tactics involves the use of group chats to target multiple victims simultaneously. By including multiple recipients, scammers aim to create a sense of legitimacy and urgency. In some cases, they add accomplices to the group to validate the initial message, making the conversation appear more credible.
Google’s analysis also identified distinct patterns in the timing of these scam messages. Activity typically begins around 5 a.m. PT in the U.S., peaking between 8 a.m. and 10 a.m. PT. Mondays see the highest volume of fraudulent messages, coinciding with the start of the workweek when individuals are often busiest and less vigilant.
Common tactics employed by scammers include the “spray and pray” approach, in which a wide net is cast in hopes of ensnaring a small fraction of victims. These messages often manufacture a false sense of urgency, referencing topical events, package delivery notifications, or toll charges to prompt immediate action. Links within them are frequently shortened to obscure their true destination, leading unsuspecting users to malicious websites designed to harvest personal information.
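One practical countermeasure to the shortened-link trick is to reveal a link’s destination without visiting it, for example by requesting only the server’s redirect header. The Kotlin sketch below shows one way to do this; the URL is a placeholder, and the approach is a general illustration rather than a feature of Google Messages.

```kotlin
// Illustrative sketch only: reveal where a shortened link points by asking
// the server for its redirect target without following it.
// The URL below is a placeholder, not a real scam link.

import java.net.HttpURLConnection
import java.net.URL

fun resolveRedirect(shortUrl: String): String? {
    val conn = URL(shortUrl).openConnection() as HttpURLConnection
    return try {
        conn.instanceFollowRedirects = false // do not visit the destination
        conn.requestMethod = "HEAD"          // headers only, no page body
        conn.connectTimeout = 5_000
        conn.readTimeout = 5_000
        when (conn.responseCode) {
            in 300..399 -> conn.getHeaderField("Location") // the real target
            else -> null // not a redirect
        }
    } finally {
        conn.disconnect()
    }
}

fun main() {
    // Placeholder shortener URL; substitute a link you want to inspect.
    println(resolveRedirect("https://example.com/short") ?: "No redirect found")
}
```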
Another method, known as “bait and wait,” involves a more calculated, personalized targeting strategy. Here, the scammer engages the victim in prolonged conversation, building trust over time. They may pose as recruiters or old friends, weaving in personal details gathered from public sources to enhance credibility. This patient approach aims to maximize financial loss by establishing a deeper connection before executing the scam.
Google’s continuous efforts to enhance Android’s security infrastructure reflect a broader commitment to user safety in the digital realm. By leveraging advanced AI technologies and analyzing emerging scam patterns, the company aims to stay ahead of malicious actors, providing users with a safer and more secure communication experience.