Traditional Cybersecurity Frameworks Struggle Against Rise of AI-Specific Threats

Traditional Security Frameworks Fall Short Against AI-Specific Cyber Threats

In recent years, the cybersecurity landscape has witnessed a surge in sophisticated attacks targeting artificial intelligence (AI) systems. Notable incidents include the December 2024 compromise of the Ultralytics AI library, where attackers injected malicious code into the build environment, leading to unauthorized cryptocurrency mining. Similarly, in August 2025, malicious Nx packages were discovered exfiltrating sensitive credentials from compromised systems. Throughout 2024, vulnerabilities in ChatGPT allowed unauthorized extraction of user data from AI memory. Collectively, these breaches resulted in the leakage of 23.77 million secrets through AI systems in 2024 alone, marking a 25% increase from the previous year.

These incidents share a troubling commonality: the affected organizations had robust security programs, passed audits, and met compliance requirements. However, their security frameworks were not designed to address the unique vulnerabilities inherent in AI systems.

The Limitations of Traditional Security Frameworks

Established security frameworks such as the NIST Cybersecurity Framework, ISO 27001, and CIS Controls have long provided comprehensive guidelines for protecting traditional IT assets. However, these frameworks were developed in an era before the widespread adoption of AI technologies and do not account for AI-specific attack vectors.

For instance, the NIST CSF 2.0, released in 2024, primarily focuses on traditional asset protection. ISO 27001:2022 offers a broad approach to information security but lacks specific guidance on AI vulnerabilities. CIS Controls v8 emphasizes endpoint security and access controls without addressing the nuances of AI threats.

Rob Witcher, co-founder of cybersecurity training company Destination Certification, highlights this gap:

Security professionals are facing a threat landscape that’s evolved faster than the frameworks designed to protect against it. The controls organizations rely on weren’t built with AI-specific attack vectors in mind.

This disconnect has led to a growing demand for specialized AI security certifications that address these emerging threats.

Understanding AI-Specific Attack Vectors

AI systems introduce new attack surfaces that traditional security controls do not adequately cover. Key AI-specific attack vectors include:

1. Prompt Injection: This involves manipulating AI behavior through carefully crafted natural language inputs. Traditional input validation controls are designed to detect malicious structured input, such as SQL injection or cross-site scripting, but they are ineffective against prompt injection attacks that use valid natural language to bypass authentication mechanisms.

2. Model Poisoning: Attackers can corrupt the training data of AI models, leading the system to learn and propagate malicious behavior as part of its normal operation. This type of attack occurs during the authorized training process, making it difficult for traditional system integrity controls to detect.

3. Adversarial Attacks: These exploit the mathematical properties of machine learning models by using inputs that appear normal to humans but cause the model to produce incorrect outputs. Traditional configuration management controls are not equipped to prevent such sophisticated manipulations.

Real-World Implications

The consequences of these gaps are not merely theoretical. They have manifested in real-world breaches:

– Ultralytics AI Library Compromise (December 2024): Attackers injected malicious code into the build environment after the code review process but before publication. This targeted the AI development pipeline—a component that traditional software supply chain controls were not designed to protect.

– ChatGPT Vulnerabilities (November 2024): Carefully crafted prompts allowed attackers to extract sensitive information from users’ conversation histories. Organizations with strong network security and access controls were still vulnerable because these controls did not address malicious natural language input designed to manipulate AI behavior.

– Malicious Nx Packages (August 2025): These packages used AI assistants to enumerate and exfiltrate secrets from compromised systems. Traditional security controls focus on preventing unauthorized code execution but are not designed to handle AI development tools that execute code based on natural language instructions.

These incidents underscore a common pattern: security teams implemented the controls required by their frameworks, which protected against traditional attacks but failed to cover AI-specific attack vectors.

The Scale of the Problem

The scale of exposure is significant. According to IBM’s Cost of a Data Breach Report 2025, organizations take an average of 276 days to identify a breach and another 73 days to contain it. For AI-specific attacks, detection times are potentially even longer because security teams lack established indicators of compromise for these novel attack types. Sysdig’s research shows a 500% surge in cloud workloads containing AI/ML packages in 2024, indicating that the attack surface is expanding far faster than defensive capabilities.

Organizations are deploying AI systems across their operations, including customer service chatbots, code assistants, data analysis tools, and automated decision systems. Most security teams cannot even inventory the AI systems in their environment, much less apply AI-specific security controls that frameworks do not require.

Bridging the Gap: What Organizations Need

The disparity between existing security frameworks and the needs of AI systems necessitates that organizations go beyond compliance. Waiting for frameworks to be updated is not a viable option, as attacks are occurring now.

Organizations need to develop new technical capabilities, including:

– Prompt Validation and Monitoring: Detecting malicious semantic content in natural language inputs, not just structured input patterns.

– Model Integrity Verification: Validating model weights and detecting poisoning, which current system integrity controls do not address.

– Adversarial Robustness Testing: Conducting red teaming focused specifically on AI attack vectors, beyond traditional penetration testing.

Traditional data loss prevention focuses on detecting structured data, such as credit card numbers and social security numbers. AI systems require semantic DLP capabilities that can identify sensitive information embedded in unstructured conversations. For example, when an employee asks an AI assistant to summarize a document and pastes in confidential business plans, traditional DLP tools may miss it because there is no obvious data pattern to detect.

AI supply chain security demands capabilities that go beyond vendor assessments and dependency scanning. Organizations need methods for validating pre-trained models, verifying dataset integrity, and detecting backdoored weights. The SR control family in NIST SP 800-53 does not provide specific guidance here because these components did not exist in traditional software supply chains.

The bigger challenge is knowledge. Security teams need to understand these threats, but traditional certifications do not cover AI attack vectors. The skills that made security professionals excellent at securing networks, applications, and data are still valuable—they are just not sufficient for AI systems. This is not about replacing security expertise; it is about extending it to cover new attack surfaces.

Conclusion

As AI systems become increasingly integrated into organizational operations, the limitations of traditional security frameworks become more apparent. To effectively protect against AI-specific threats, organizations must develop new technical capabilities, enhance their security knowledge, and go beyond compliance to implement proactive measures tailored to the unique vulnerabilities of AI technologies.