Google Suspends OpenClaw Users Over OAuth Token Misuse
In a decisive move, Google has suspended access to its Antigravity AI platform for numerous OpenClaw users, citing violations of its Terms of Service (ToS) related to OAuth token misuse. This action has ignited a heated debate within the developer community regarding the enforcement measures and their broader implications.
Background on OpenClaw and Antigravity AI
OpenClaw, an open-source tool that has gained rapid popularity since its November 2025 launch, enables users to deploy local AI agents for tasks such as email management and web browsing. By February 2026, it had amassed over 219,000 stars on GitHub, reflecting its widespread adoption.
Antigravity AI, on the other hand, is Google’s developer-facing platform that provides access to advanced AI models like Gemini 2.5 Pro. Developers typically access these models through official channels, adhering to Google’s usage policies.
The Issue of OAuth Token Misuse
The crux of the suspension lies in how OpenClaw users integrated with Antigravity AI. By leveraging OpenClaw’s OAuth plugin, developers accessed subsidized Gemini model tokens, effectively bypassing Google’s official channels. This unauthorized access led to significant backend spikes and service degradation, prompting Google’s automated systems to flag these activities as malicious usage patterns.
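To make the access pattern concrete, here is a minimal, hypothetical sketch of how a third-party agent might present a borrowed OAuth bearer token to a model endpoint, and how the resulting 403s would surface. The endpoint URL, request shape, and helper names are illustrative assumptions, not Google's actual Gemini API:

```python
import json
import urllib.request

# Placeholder endpoint; Google's real Gemini API paths are not reproduced here.
GEMINI_ENDPOINT = "https://gemini.example.invalid/v1/models/gemini-2.5-pro:generateContent"

def build_request(oauth_token: str, prompt: str) -> urllib.request.Request:
    """Build a request that presents a borrowed first-party OAuth token.

    This is the pattern described above: the bearer token was issued to an
    official client, but a third-party agent sends it instead, so the
    traffic bypasses Google's official channels.
    """
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    return urllib.request.Request(
        GEMINI_ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {oauth_token}",  # token reuse happens here
            "Content-Type": "application/json",
        },
    )

def classify_status(code: int) -> str:
    """Map HTTP status codes to what affected users reported seeing."""
    if code == 403:
        return "account restricted (ToS enforcement)"
    if code == 429:
        return "rate limited"
    return "ok" if 200 <= code < 300 else f"error {code}"
```

Once an account was flagged, every such request would come back with a 403, which `classify_status` maps to the "account restricted" state users described on the forum.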
The Suspension Wave
Reports began surfacing in mid-February 2026 on Google’s AI Developer Forum, where subscribers to the AI Ultra plan, paying $249.99 monthly, encountered sudden 403 errors and full account restrictions without prior warnings. The repercussions were severe, with some users losing access not only to Antigravity and Gemini CLI but also to essential services like Gmail and Workspace, along with years of account history.
Varun Mohan, a product lead at Google DeepMind, addressed the situation on X (formerly Twitter), stating that the unauthorized usage "tremendously degraded the quality of service for our users." He emphasized the need to prioritize legitimate users and, given capacity constraints, mentioned a limited reinstatement path for those who unknowingly violated the terms.
Industry Reactions and Security Concerns
The crackdown mirrors similar actions by other AI platforms. Anthropic, for instance, recently updated its ToS to explicitly ban third-party OAuth integrations in tools like OpenClaw, citing issues like token arbitrage and debugging challenges arising from anomalous traffic.
Security experts have also raised alarms about OpenClaw’s vulnerabilities. Over 21,000 exposed instances were identified, making them susceptible to information-stealing attacks targeting configuration files and supply chain threats. China’s industry ministry highlighted the risks associated with misconfigured systems, which can lead to cyberattacks and data breaches, underscoring the double-edged nature of agentic AI.
Peter Steinberger, the creator of OpenClaw and a recent addition to OpenAI's team, criticized Google's stringent enforcement measures, describing them as "draconian." He announced plans to remove Antigravity support from OpenClaw, signaling a shift in the tool's development trajectory.
The Broader Implications
The developer community is now pivoting towards alternative platforms like Nanobot and IronClaw. OpenAI’s endorsement of third-party integrations further widens the competitive landscape, highlighting the ongoing tension between open-source flexibility and proprietary platform restrictions.
This incident underscores the evolving dynamics of AI ecosystems. As AI agents become more prevalent, the enforcement of usage policies serves as a stark reminder that unauthorized access can lead to significant disruptions, affecting not only individual developers but also the broader user base.
Conclusion
Google’s suspension of OpenClaw users from Antigravity AI due to OAuth token misuse highlights the critical importance of adhering to platform terms of service. It also brings to the forefront the challenges developers face in navigating the complex landscape of AI integrations, balancing innovation with compliance.