A recent analysis by Harmonic Security has revealed that employees in the United States and the United Kingdom are widely using Chinese-developed generative AI (GenAI) tools without the knowledge or approval of their organizations’ security teams. This unmonitored use has repeatedly sent sensitive data to platforms hosted in China, raising serious concerns about compliance, data residency, and the confidentiality of commercial information.
Scope of the Study
Over a 30-day period, Harmonic Security examined the activities of 14,000 employees across various companies. The findings were striking: nearly 8% of these employees engaged with China-based GenAI tools such as DeepSeek, Kimi Moonshot, Baidu Chat, Qwen (developed by Alibaba), and Manus. These platforms, while offering powerful capabilities and user-friendly interfaces, often lack transparency concerning the handling, storage, and reuse of uploaded data.
This trend highlights a growing disconnect between the rapid adoption of AI technologies and the implementation of governance measures, particularly in developer-centric organizations where the emphasis on quick results may overshadow adherence to established policies.
Extent of Data Exposure
The study uncovered that over 17 megabytes of content were uploaded to these Chinese platforms by 1,059 users. Within this data, 535 separate incidents involved sensitive information. Notably, nearly one-third of the exposed material comprised source code and engineering documentation. The remaining data included documents related to mergers and acquisitions, financial reports, personally identifiable information (PII), legal contracts, and customer records.
DeepSeek emerged as the most frequently used tool, accounting for 85% of the recorded incidents. Kimi Moonshot and Qwen also saw significant usage. This pattern indicates that GenAI tools are infiltrating corporate networks not through official channels but via unsanctioned, employee-driven adoption.
Implications of Permissive Data Policies
Chinese GenAI services often operate under permissive or opaque data policies. In certain instances, the terms of these platforms permit the use of uploaded content for further model training. This practice poses substantial risks for companies, especially those in regulated industries or those handling proprietary software and internal business strategies.
Implementing Technical Controls for Policy Enforcement
To address these challenges, Harmonic Security has developed tools designed to help enterprises regain control over GenAI usage within the workplace. Their platform offers real-time monitoring of AI activities and enforces policies at the point of use.
Organizations can implement granular controls to:
– Block access to specific applications based on their country of origin.
– Restrict the types of data that can be uploaded.
– Educate users through contextual prompts about the risks and policies associated with AI tool usage.
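To make these controls concrete, the following is a minimal sketch of how such a policy gate might be structured. It is illustrative only: the names (`BLOCKED_COUNTRIES`, `SENSITIVE_PATTERNS`, `evaluate_upload`) and the simple regex-based classification are assumptions for this example, not part of Harmonic Security’s actual platform.

```python
# Illustrative sketch of GenAI upload policy enforcement.
# All identifiers and rules here are hypothetical examples,
# not Harmonic Security's real API or detection logic.

import re
from dataclasses import dataclass

# Block applications by country of origin (first listed control).
BLOCKED_COUNTRIES = {"CN"}

# Restrict data types via naive pattern checks (second control).
# Real products use far more robust classifiers.
SENSITIVE_PATTERNS = {
    "source_code": re.compile(r"(\bdef |\bclass |#include|\bimport )"),
    "pii_email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}


@dataclass
class AppInfo:
    name: str
    country_of_origin: str  # ISO country code of the hosting vendor


def evaluate_upload(app: AppInfo, text: str) -> tuple[str, str]:
    """Return a (decision, reason) pair for an attempted upload.

    Decisions: "block" (country policy), "warn" (educate the user
    with a contextual prompt, the third control), or "allow".
    """
    if app.country_of_origin in BLOCKED_COUNTRIES:
        return "block", f"{app.name} is hosted in a blocked region"
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            return "warn", f"possible {label} detected; review policy first"
    return "allow", "no policy match"
```

For example, `evaluate_upload(AppInfo("DeepSeek", "CN"), "hello")` would return a `"block"` decision regardless of content, while an upload containing source-code markers to a non-blocked app would return `"warn"` so the user can be shown a contextual policy prompt.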
Governance as a Strategic Necessity
The unauthorized use of GenAI tools within enterprises is no longer a theoretical concern. Harmonic Security’s data indicates that nearly one in twelve employees is already interacting with Chinese GenAI platforms, often without understanding the associated risks related to data retention and jurisdictional exposure.
These findings suggest that awareness alone is insufficient. Organizations must implement active, enforced controls so that GenAI adoption can proceed without compromising compliance or security. As AI technology continues to evolve, the ability to govern its use will become as critical as the performance of the models themselves.
Harmonic Security enables businesses to harness the benefits of GenAI while mitigating unnecessary risks.