Thousands of Public Google Cloud API Keys Exposed, Granting Unauthorized Gemini Access
Recent research has uncovered a significant security vulnerability involving Google Cloud API keys. These keys, typically used as project identifiers for billing, can be exploited to authenticate to sensitive Gemini endpoints, potentially exposing private data.
Truffle Security identified nearly 3,000 Google API keys, recognizable by the AIza prefix, embedded in client-side code. These keys are commonly used to enable Google services, such as embedded maps, on websites.
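The AIza prefix makes these keys easy to spot in scraped JavaScript. The sketch below is a minimal scanner in the spirit of open-source secret-scanning tools; the 39-character pattern (AIza plus 35 URL-safe characters) is the form commonly matched by such tools, not an official Google specification, and the sample key is a made-up placeholder.

```python
import re

# Candidate Google API keys: "AIza" followed by 35 URL-safe characters.
# (Heuristic pattern used by secret scanners; not an official spec.)
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z\-_]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return any candidate AIza-prefixed keys found in a blob of text."""
    return GOOGLE_API_KEY_RE.findall(text)

# Example: scanning a fetched JavaScript bundle for embedded keys.
sample_js = 'config.key = "AIzaSyD-EXAMPLEexampleEXAMPLEexampleEXA";'
print(find_google_api_keys(sample_js))
```

Running a scanner like this against your own public assets is a quick way to find keys that may now double as Gemini credentials.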
Security researcher Joe Leon highlighted the risk, stating that with a valid key, an attacker can access uploaded files, cached data, and charge large language model (LLM) usage to the victim’s account. He emphasized that these keys now authenticate to Gemini, even though they were never intended for such use.
The issue arises when users enable the Gemini API (Generative Language API) on a Google Cloud project. Doing so silently grants every existing API key in that project, including keys exposed in website JavaScript code, access to Gemini endpoints, with no warning to the project owner.
This vulnerability allows attackers to scrape websites, obtain these API keys, and misuse them for malicious purposes, including accessing sensitive files through the /files and /cachedContents endpoints and making unauthorized Gemini API calls. Such activities can lead to substantial financial charges for the victims.
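As a rough sketch of the abuse path described above: the Gemini (Generative Language) API accepts key-only authentication via a `key` query parameter, so a scraped key is enough to build requests against the `files` and `cachedContents` endpoints. The key value below is a placeholder, and the snippet only constructs the URLs; an attacker would follow up with plain GET requests.

```python
import urllib.parse

# Base URL of the public Generative Language (Gemini) API.
GEMINI_BASE = "https://generativelanguage.googleapis.com/v1beta"

def gemini_url(endpoint: str, api_key: str) -> str:
    """Build a key-authenticated URL for a Gemini endpoint such as
    'files' or 'cachedContents'. Any leaked AIza key can hit these
    once the Generative Language API is enabled in its project."""
    return f"{GEMINI_BASE}/{endpoint}?{urllib.parse.urlencode({'key': api_key})}"

# A scraped key (hypothetical placeholder value) can enumerate a
# victim's uploaded files and cached content:
leaked_key = "AIza-LEAKED-PLACEHOLDER"
print(gemini_url("files", leaked_key))
print(gemini_url("cachedContents", leaked_key))
```

The same key-only authentication applies to text-generation calls, which is how attackers run up LLM usage on the victim's bill.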
Furthermore, Truffle Security discovered that creating a new API key in Google Cloud defaults to an Unrestricted setting. This means the key works with every API enabled in the project, including Gemini. Consequently, thousands of API keys initially deployed as benign billing tokens are now active Gemini credentials exposed on the public internet. Among the 2,863 live keys found, some were associated with a website linked to Google.
This disclosure coincides with a similar report from Quokka, which found over 35,000 unique Google API keys embedded in its scan of 250,000 Android apps. The mobile security company warned that beyond potential cost abuse through automated LLM requests, organizations must consider how AI-enabled endpoints might interact with prompts, generated content, or connected cloud services in ways that expand the blast radius of a compromised key.
Google initially considered this behavior to be working as intended, but has since addressed it. A Google spokesperson said the company is aware of the report and has worked with the researchers to resolve the issue, adding that protecting users' data and infrastructure is its top priority and that it has implemented proactive measures to detect and block leaked API keys attempting to access the Gemini API.
It remains unclear if this issue has been exploited in the wild. However, a recent Reddit post detailed a case where a stolen Google Cloud API Key resulted in $82,314.44 in charges between February 11 and 12, 2026, a significant increase from the regular monthly spend of $180.
Users with Google Cloud projects are advised to review their APIs and services to verify if artificial intelligence (AI)-related APIs are enabled. If such APIs are enabled and the project's keys are publicly accessible, whether in client-side JavaScript or checked into a public repository, it's crucial to rotate the keys. Truffle Security recommends starting with the oldest keys: they are the most likely to have been deployed publicly under the old guidance that API keys were safe to share, and they may have retroactively gained Gemini privileges when the API was enabled.
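Truffle Security's triage advice can be expressed as a simple ordering over a key inventory. The sketch below assumes you have already exported each key's creation time and restriction status (for example from the Cloud Console); the field names and the inventory itself are illustrative, not a Google schema.

```python
from datetime import datetime, timezone

def rotation_order(keys: list[dict]) -> list[dict]:
    """Order keys for rotation: unrestricted keys first, then oldest
    first, since old keys were most likely published under the former
    guidance that API keys were safe to share."""
    # False sorts before True, so unrestricted keys lead the list.
    return sorted(keys, key=lambda k: (k["restricted"], k["created"]))

# Illustrative inventory (names and fields are assumptions):
inventory = [
    {"name": "maps-embed-key", "created": datetime(2019, 5, 1, tzinfo=timezone.utc), "restricted": False},
    {"name": "backend-key", "created": datetime(2023, 8, 10, tzinfo=timezone.utc), "restricted": True},
    {"name": "new-site-key", "created": datetime(2024, 2, 2, tzinfo=timezone.utc), "restricted": False},
]

for k in rotation_order(inventory):
    print(k["name"])
# Rotates maps-embed-key first: it is both unrestricted and oldest.
```

Applying API restrictions to each replacement key at creation time avoids repeating the unrestricted-by-default exposure described above.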
This situation underscores the dynamic nature of risk and how APIs can become over-permissioned over time. Tim Erlin, a security strategist at Wallarm, emphasized that security testing, vulnerability scanning, and other assessments must be continuous. He noted that APIs are particularly tricky because changes in their operations or the data they can access aren’t necessarily vulnerabilities but can directly increase risk. The adoption of AI running on these APIs accelerates the problem. Therefore, organizations must profile behavior and data access, identify anomalies, and actively block malicious activity.