The integration of generative AI platforms such as ChatGPT, Gemini, Copilot, and Claude into organizational workflows has significantly boosted efficiency across various tasks. However, this advancement introduces new challenges in data loss prevention (DLP). Sensitive information can inadvertently be shared through chat prompts, file uploads for AI-driven summarization, or browser plugins that bypass traditional security controls. Conventional DLP solutions often fail to detect these activities, necessitating an evolution in security strategies.
The Need for Evolved Data Loss Prevention Strategies
Traditional DLP tools primarily focus on endpoints and specific communication channels, such as email or storage systems. However, the dynamic nature of generative AI interactions requires a more comprehensive approach. Network Detection and Response (NDR) technologies, like Fidelis Network®, offer network-based DLP capabilities that provide visibility across the entire data traffic path. These solutions analyze traffic patterns, even when content is encrypted, to identify potential threats as data moves through the network.
The primary concern is not solely the origin of the data but also the methods and timing of its departure from organizational control. This includes direct uploads, conversational queries, or integrated AI features within business systems. By focusing on these aspects, organizations can better manage the risks associated with generative AI usage.
Effective Monitoring of Generative AI Usage
Implementing a robust DLP strategy for generative AI involves several key approaches:
1. URL-Based Indicators and Real-Time Alerts
Administrators can establish specific indicators for various generative AI platforms. These rules can be applied across multiple services and tailored to relevant departments or user groups. Monitoring can encompass web, email, and other communication channels.
Process:
– When a user accesses a generative AI endpoint, the NDR system generates an alert.
– If a DLP policy is triggered, the platform records a full packet capture for subsequent analysis.
– Web and mail sensors can automate actions, such as redirecting user traffic or isolating suspicious messages.
Advantages:
– Real-time notifications enable prompt security responses.
– Supports comprehensive forensic analysis as needed.
– Integrates with incident response playbooks and Security Information and Event Management (SIEM) or Security Operations Center (SOC) tools.
Considerations:
– Maintaining up-to-date rules is necessary as AI endpoints and plugins evolve.
– High generative AI usage may require alert tuning to avoid overload.
2. Metadata-Only Monitoring for Audit and Low-Noise Environments
Not all organizations require immediate alerts for every generative AI activity. Network-based DLP policies can record activities as metadata, creating a searchable audit trail with minimal disruption.
Process:
– Alerts are suppressed, and all relevant session metadata is retained.
– Each session record captures source and destination IP addresses, protocol, port, device, and timestamp.
– Security teams can review all generative AI interactions historically by host, group, or time frame.
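A metadata-only record and the kind of historical query it supports might look like the sketch below. The field names and query helper are illustrative, not a specific product schema; the point is that a searchable audit trail can be filtered by host or time frame long after the sessions occurred.

```python
# Sketch of a metadata-only audit record and a simple historical query;
# schema is illustrative, not a real product's.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class SessionRecord:
    timestamp: datetime
    src_ip: str
    dst_ip: str
    protocol: str
    dst_port: int
    device: str
    service: str  # which generative AI platform was contacted

def query(records, *, host=None, since=None, until=None):
    """Filter the audit trail by source host and/or time frame."""
    out = []
    for r in records:
        if host is not None and r.src_ip != host:
            continue
        if since is not None and r.timestamp < since:
            continue
        if until is not None and r.timestamp > until:
            continue
        out.append(r)
    return out
```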
Benefits:
– Reduces false positives and operational fatigue for SOC teams.
– Enables long-term trend analysis and audit or compliance reporting.
Limits:
– Important events may go unnoticed if not regularly reviewed.
– Session-level forensics and full packet capture are available only when a specific alert is escalated.
In practice, many organizations use this approach as a baseline, adding active monitoring only for higher-risk departments or activities.
3. Detecting and Preventing Risky File Uploads
Uploading files to generative AI platforms introduces higher risks, especially when handling Personally Identifiable Information (PII), Protected Health Information (PHI), or proprietary data. NDR solutions can monitor such uploads and enforce policies to prevent unauthorized data sharing.
Process:
– The NDR system detects file uploads to generative AI platforms.
– Policies are enforced to block or flag uploads containing sensitive information.
– Alerts are generated for security teams to investigate and respond accordingly.
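The inspection step in this process can be sketched as a content scan over the reconstructed upload body. The two patterns below (a US Social Security number format and an email address) are illustrative examples of PII detectors, not a complete policy, and the block/flag decision logic is a simplifying assumption.

```python
# Simplified sketch of content inspection on an intercepted file upload;
# patterns and decision logic are illustrative only.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def inspect_upload(body: str) -> dict:
    """Return a block/allow decision with the PII types that matched."""
    matched = [name for name, pat in PII_PATTERNS.items() if pat.search(body)]
    if matched:
        return {"action": "block", "matched": matched}
    return {"action": "allow", "matched": []}
```

A real deployment would inspect many more data types (PHI identifiers, credentials, proprietary markers) and might flag rather than block, depending on policy.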
Advantages:
– Prevents unauthorized sharing of sensitive data.
– Provides visibility into user activities involving file uploads.
– Supports compliance with data protection regulations.
Considerations:
– Requires regular updates to policies to address emerging threats.
– May necessitate user education to prevent inadvertent policy violations.
Conclusion
As generative AI platforms become integral to organizational operations, it is crucial to adapt security measures to address the unique challenges they present. Implementing network-based DLP strategies, such as those offered by NDR technologies, provides comprehensive visibility and control over data interactions with AI platforms. By adopting approaches like URL-based indicators, metadata monitoring, and file upload detection, organizations can effectively mitigate risks and ensure the secure use of generative AI tools.