Enterprise Data at Risk: 75% of Users Upload Sensitive Information to Generative AI Platforms

The rapid integration of generative artificial intelligence (GenAI) into enterprise environments has introduced significant data security challenges. A recent report from Netskope Threat Labs reveals that 75% of enterprise users are uploading data to GenAI applications, including highly sensitive information such as passwords, cryptographic keys, proprietary source code, regulated personal data, and intellectual property. This trend underscores the urgent need for organizations to reassess their data security strategies in the context of widespread GenAI adoption.

Surge in GenAI Usage and Data Transmission

Over the past year, there has been a 30-fold increase in data transmitted to GenAI applications by enterprise users. This surge encompasses a wide array of sensitive information, significantly elevating the risk of data breaches, compliance violations, and intellectual property theft. The report highlights that 72% of these users access GenAI platforms through personal accounts for work-related tasks, creating substantial security blind spots for organizations.
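To make the exposure concrete, consider how even a rudimentary outbound check could catch obvious credentials before a prompt leaves the network. The sketch below is purely illustrative: the regex patterns and the scan_prompt helper are assumptions for demonstration, not part of the Netskope report or any vendor's tooling, and real data-loss-prevention engines are far more sophisticated.

```python
import re

# Illustrative patterns for material that should never appear in a GenAI prompt.
# Simplified for demonstration; not an exhaustive or production-grade rule set.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in outbound prompt text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Why does login fail? password = hunter2, key AKIAABCDEFGHIJKLMNOP"
    hits = scan_prompt(prompt)
    if hits:
        print(f"Upload flagged; matched patterns: {', '.join(hits)}")
```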

The Rise of “Shadow AI”

The phenomenon of “shadow AI” has emerged as a predominant challenge, akin to the earlier issue of shadow IT. Employees are increasingly utilizing GenAI tools without formal approval or oversight, often through personal accounts. James Robinson, Chief Information Security Officer at Netskope, emphasizes the gravity of this situation:

“Despite earnest efforts by organizations to implement company-managed GenAI tools, our research shows that shadow IT has turned into shadow AI, with nearly three-quarters of users still accessing GenAI apps through personal accounts.”

The trend highlights how urgently organizations need advanced data security measures to regain governance, visibility, and control over GenAI usage.

Integration of GenAI Across Applications

The report identifies 317 GenAI applications in use, including popular platforms like ChatGPT, Google Gemini, and GitHub Copilot. Furthermore, 75% of enterprise users are accessing applications with embedded GenAI features, compounding the challenge security teams face in addressing unintentional insider threats. While some organizations have adopted “block first and ask questions later” policies, security leaders are encouraged to pursue safe enablement strategies that balance productivity gains against risk.
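As one hedged sketch of what safe enablement can look like in practice, the policy below allows requests to sanctioned GenAI apps, coaches users of known-but-unmanaged apps toward the corporate tenant, and blocks only unvetted ones. The app tiers, the domain in the tolerated set, and the decide function are hypothetical illustrations, not a documented policy from the report.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    COACH = "coach"   # permit, but steer the user toward the managed corporate tenant
    BLOCK = "block"

# Hypothetical app tiers an organization might maintain after vetting its GenAI landscape.
SANCTIONED = {"chat.openai.com", "gemini.google.com"}  # managed, contracts in place
TOLERATED = {"assistant.example.com"}                  # known app, personal use discouraged

def decide(domain: str, corporate_account: bool) -> Action:
    """Choose a gateway action for a GenAI request: enable safely rather than block-first."""
    if domain in SANCTIONED:
        return Action.ALLOW if corporate_account else Action.COACH
    if domain in TOLERATED:
        return Action.COACH
    return Action.BLOCK  # unvetted GenAI app: block until it has been reviewed

print(decide("gemini.google.com", corporate_account=False))  # Action.COACH
print(decide("unvetted-ai.example", corporate_account=True))  # Action.BLOCK
```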

Shift Towards Local GenAI Infrastructure

A notable shift has occurred in GenAI infrastructure: the share of organizations running GenAI locally has risen from less than 1% to 54% over the past year. While this move reduces the risk of data exposure to third-party cloud applications, it introduces new security concerns around supply chains, data leakage, and improper handling of model output.
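Running models locally moves part of the trust boundary onto the model supply chain itself. As a minimal sketch of one basic control, the code below verifies a downloaded model artifact against a pinned SHA-256 digest before it is loaded; the workflow and function names are assumptions for illustration, not guidance drawn from the report.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, pinned_digest: str) -> None:
    """Refuse to load a model artifact whose digest does not match the pinned value."""
    actual = sha256_of(path)
    if actual != pinned_digest.lower():
        raise RuntimeError(
            f"Supply-chain check failed for {path.name}: "
            f"expected {pinned_digest}, got {actual}"
        )

if __name__ == "__main__":
    # Demo with a throwaway file standing in for a model artifact; in practice the
    # pinned digest would come from a trusted registry or a publisher's signed release.
    demo = Path("demo_model.bin")
    demo.write_bytes(b"model weights placeholder")
    verify_model(demo, sha256_of(demo))  # passes silently when digests match
    print("artifact digest verified")
```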

Recommendations for Enterprises

To address these challenges, Netskope recommends that enterprises:

– Assess the GenAI Landscape: Evaluate the current use of GenAI applications within the organization to identify vulnerabilities and gaps in security controls.

– Implement Core Controls: Establish fundamental security measures, such as access controls, authentication mechanisms, and encryption, to protect sensitive data.

– Plan for Advanced Controls: Develop a roadmap for advanced security controls, including threat modeling, anomaly detection, continuous monitoring, and behavioral detection to identify suspicious data movements (a minimal sketch of such a detector follows this list).

– Regularly Evaluate Security Measures: Continuously assess the effectiveness of security measures and adapt them based on real-world experiences and emerging threats.
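To make the behavioral-detection item concrete, here is a minimal sketch of one common approach: flag a user whose daily upload volume to GenAI apps deviates sharply from their own baseline, using a simple z-score test. The window, threshold, and function are illustrative assumptions rather than the report's prescription; production systems would weigh many more signals.

```python
import statistics

def flag_anomalous_uploads(
    history_mb: list[float], today_mb: float, z_threshold: float = 3.0
) -> bool:
    """Flag today's GenAI upload volume if it falls far outside the user's own baseline.

    history_mb: daily megabytes this user sent to GenAI apps over a lookback window.
    """
    if len(history_mb) < 7:  # not enough history to form a baseline
        return False
    mean = statistics.fmean(history_mb)
    stdev = statistics.stdev(history_mb)
    if stdev == 0:  # perfectly flat baseline: flag any meaningful jump
        return today_mb > 2 * mean and today_mb > 1.0
    return (today_mb - mean) / stdev > z_threshold

# A user who normally uploads ~5 MB per day suddenly pushes 500 MB to a GenAI app.
baseline = [4.2, 5.1, 6.0, 4.8, 5.5, 4.9, 5.3, 6.1]
print(flag_anomalous_uploads(baseline, 500.0))  # True -> worth investigating
```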

As the AI landscape continues to evolve, organizations must remain vigilant and proactive in their approach to data security. Integrating AI-driven security measures will be crucial in keeping pace with the rapidly changing threat landscape and ensuring the safe and responsible use of GenAI technologies in the enterprise environment.