Critical Vulnerability in Microsoft Copilot Exposes Backend Systems to Unauthorized Access

A significant security flaw has been identified in Microsoft Copilot Enterprise, enabling unauthorized users to gain root access to its backend container. This vulnerability poses substantial risks, including potential manipulation of system settings, unauthorized access to sensitive data, and overall compromise of the application’s integrity.

Origins of the Vulnerability

The issue traces back to an April 2025 update that introduced a live Python sandbox powered by Jupyter Notebook, designed to execute code seamlessly within Copilot. While intended to enhance functionality, this feature inadvertently opened avenues for exploitation, underscoring the inherent risks in integrating AI systems without robust security measures.

Discovery and Exploitation

Security researchers at Eye Security uncovered the vulnerability, likening the interaction with Copilot to managing an unpredictable child. By leveraging Jupyter’s `%command` syntax, they executed arbitrary Linux commands as the ‘ubuntu’ user within a Miniconda environment. Notably, although the user belonged to the sudo group, the absence of the sudo binary added complexity to the setup.

The sandbox environment mirrored models like ChatGPT but featured a newer kernel and Python 3.12, compared to ChatGPT’s 3.11 at the time. Further exploration revealed the sandbox’s core role in running Jupyter Notebooks alongside a Tika server. The container utilized a limited link-local network interface with a /32 netmask and an OverlayFS filesystem linked to a /legion path on the host.

Custom scripts resided in the /app directory. Through persistent commands, Copilot could be persuaded to download files or tar folders, copying them to /mnt/data for external access via blob links on outlook.office[.]com.

Technical Breakdown

A critical binary, `goclientapp` located in /app, acted as the container’s interface, running a web server on port 6000 for POST requests to /execute endpoints. Simple JSON payloads, such as `{code:%env}`, triggered code execution in the Jupyter environment. An `httpproxy` binary suggested potential future outbound traffic capabilities, though egress was disabled at the time.

The `entrypoint.sh` script, running as root, was pivotal. It dropped privileges for most processes but launched `keepAliveJupyterSvc.sh` as root. A significant oversight in line 28 involved a `pgrep` command without a full path, executed in a ‘while true’ loop every two seconds. This relied on the `$PATH` variable, which included writable directories like /app/miniconda/bin before /usr/bin, where the legitimate `pgrep` resides.

Exploiting this, researchers crafted a malicious Python script disguised as `pgrep` in the writable path. Uploaded via Copilot, it read commands from /mnt/data/in, executed them with `popen`, and outputted to /mnt/data/out. This granted root access, enabling filesystem exploration. However, no sensitive data or breakout paths were found, as known vulnerabilities had been patched.

Reporting and Response

Eye Security reported the issue to Microsoft’s Security Response Center (MSRC) on April 18, 2025. The vulnerability was addressed by July 25, classified as moderate severity. No bounty was awarded, but acknowledgment was given on Microsoft’s researcher page.

The researchers noted that the exploit yielded absolutely nothing beyond the exercise itself but hinted at further discoveries, including access to the Responsible AI Operations panel for Copilot and 21 internal services via Entra OAuth abuse.

Implications and Recommendations

This incident underscores the double-edged nature of AI sandboxes: while they offer innovative capabilities, they are also susceptible to creative attacks. Organizations integrating AI tools like Copilot must prioritize robust security measures, including:

– Regular Security Assessments: Conduct thorough evaluations of AI systems to identify and mitigate potential vulnerabilities.

– Access Controls: Implement strict role-based access controls to limit exposure and reduce the risk of unauthorized access.

– Continuous Monitoring: Establish ongoing monitoring mechanisms to detect and respond to suspicious activities promptly.

– User Education: Train users on the potential risks associated with AI tools and promote best practices for secure usage.

By adopting these strategies, organizations can enhance their security posture and mitigate the risks associated with AI integrations.