Critical GitHub Copilot Vulnerability Allowed Silent Exfiltration of Private Repository Data

A significant security flaw in GitHub’s Copilot Chat, assigned a CVSS score of 9.6, was identified, potentially enabling attackers to covertly extract source code and sensitive information from private repositories. This vulnerability was addressed following a responsible disclosure through HackerOne.

Understanding the Vulnerability

GitHub Copilot is an AI-powered code completion tool that assists developers by providing code suggestions based on the context of their projects. Its design allows it to access and process information from repositories, including code snippets and pull request descriptions, to generate relevant suggestions.

Researchers at Legit Security discovered that malicious prompts could be embedded directly into pull request descriptions using GitHub’s invisible comments feature. These comments are not visible in the standard user interface but are still processed by Copilot. Consequently, if a developer utilized Copilot to review such a pull request, their session could be compromised without their knowledge.

Exploitation Mechanism

The attack leveraged Copilot’s context-aware functionality. By embedding a hidden malicious prompt within a pull request, an attacker could manipulate Copilot to execute unauthorized actions. Since Copilot operates with the same permissions as the user, it could be directed to access and manipulate data within the victim’s private repositories.

Circumventing Security Measures

GitHub employs a stringent Content Security Policy (CSP) to prevent data exfiltration to external domains. A key component of this policy is the use of Camo, a proxy service that securely renders images from third-party sites by converting external image URLs into signed `camo.githubusercontent.com` links. Only URLs with valid signatures generated by GitHub are processed, effectively blocking unauthorized data transmission via simple `` tags.

To bypass this security measure, the researchers devised an innovative approach:

1. Pre-Generated URL Dictionary: They created a dictionary of valid Camo URLs corresponding to every letter and symbol. Each URL pointed to a 1×1 transparent pixel hosted on a server under their control.

2. Data Encoding via Images: The malicious prompt instructed Copilot to locate sensitive information within the victim’s private repository, such as AWS keys or descriptions of zero-day vulnerabilities.

3. Stealthy Data Transmission: Copilot was then directed to draw this sensitive information as a sequence of invisible images using the pre-generated Camo URL dictionary. When the victim’s browser rendered these images, it sent a series of requests to the attacker’s server, effectively transmitting the sensitive data one character at a time.

Proof of Concept and Remediation

The researchers successfully demonstrated the exfiltration of code from a private repository using this method. Upon receiving the vulnerability report, GitHub acted promptly to mitigate the risk. On August 14, 2025, they disabled all image rendering within the Copilot Chat feature, effectively neutralizing the attack vector.

Implications for Developers

This vulnerability underscores the potential risks associated with integrating AI tools into development workflows. While AI assistants like Copilot can significantly enhance productivity, they also introduce new attack surfaces that can be exploited if not properly secured.

Recommendations for Developers:

– Stay Informed: Regularly monitor security advisories related to the tools and platforms you use.

– Review Pull Requests Carefully: Be vigilant when reviewing pull requests, especially those from unfamiliar sources. Look out for any unusual or hidden content.

– Limit AI Permissions: Configure AI tools to operate with the minimum necessary permissions to reduce potential impact in case of compromise.

– Implement Security Training: Educate development teams about potential AI-related security risks and best practices for mitigating them.

Conclusion

The discovery and prompt remediation of this GitHub Copilot vulnerability highlight the importance of continuous security assessments, especially as AI tools become more integrated into software development processes. Developers and organizations must remain proactive in identifying and addressing potential security issues to safeguard their code and sensitive information.