Irregular Secures $80 Million to Fortify Advanced AI Models Against Emerging Threats

In a significant move to bolster the security of cutting-edge artificial intelligence (AI) systems, Irregular, an AI security firm formerly known as Pattern Labs, has successfully raised $80 million in a recent funding round. This round was led by prominent venture capital firms Sequoia Capital and Redpoint Ventures, with additional participation from Assaf Rappaport, CEO of Wiz. The investment has elevated Irregular’s valuation to approximately $450 million.

Co-founder Dan Lahav highlighted the evolving landscape of AI interactions, stating, Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction, and that’s going to break the security stack along multiple points. ([techcrunch.com](https://techcrunch.com/2025/09/17/irregular-raises-80-million-to-secure-frontier-ai-models/?utm_source=openai))

Pioneering AI Security Evaluations

Irregular has established itself as a key player in the realm of AI security evaluations. The company’s assessments have been integral to the security evaluations of models such as Claude 3.7 Sonnet and OpenAI’s o3 and o4-mini models. Central to their methodology is the SOLVE framework, a widely adopted system for assessing a model’s vulnerability detection capabilities. ([techcrunch.com](https://techcrunch.com/2025/09/17/irregular-raises-80-million-to-secure-frontier-ai-models/?utm_source=openai))

Anticipating and Mitigating Emerging Risks

Beyond addressing existing vulnerabilities, Irregular is proactively focusing on identifying and mitigating emergent risks and behaviors in AI models before they manifest in real-world scenarios. The company has developed sophisticated simulated environments that allow for intensive pre-release testing of AI models. Co-founder Omer Nevo elaborated on this approach, stating, We have complex network simulations where we have AI both taking the role of attacker and defender. So when a new model comes out, we can see where the defenses hold up and where they don’t. ([techcrunch.com](https://techcrunch.com/2025/09/17/irregular-raises-80-million-to-secure-frontier-ai-models/?utm_source=openai))

Addressing the Growing Importance of AI Security

The AI industry is increasingly prioritizing security, especially as advanced models present new and complex risks. For instance, OpenAI recently overhauled its internal security measures to address potential corporate espionage concerns. Additionally, AI models are becoming more adept at identifying software vulnerabilities, a capability with significant implications for both cybersecurity defenders and potential attackers. ([techcrunch.com](https://techcrunch.com/2025/09/17/irregular-raises-80-million-to-secure-frontier-ai-models/?utm_source=openai))

A Vision for Secure AI Deployment

Irregular’s mission aligns with the broader goal of ensuring that as AI models become more sophisticated and capable, they are also secure. Lahav emphasized this commitment, stating, If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models. But it’s a moving target, so inherently there’s much, much, much more work to do in the future. ([techcrunch.com](https://techcrunch.com/2025/09/17/irregular-raises-80-million-to-secure-frontier-ai-models/?utm_source=openai))

Strategic Partnerships and Industry Impact

Irregular’s collaborative efforts with leading AI labs such as OpenAI and Anthropic have been instrumental in shaping industry standards. The company’s evaluations are cited in OpenAI’s system cards for GPT-4 o3, o4 mini, and GPT-5. Furthermore, the UK government and Anthropic utilize Irregular’s SOLVE framework to assess cyber risks in models like Claude 4. These partnerships underscore Irregular’s pivotal role in defining security frameworks for the safe deployment of advanced AI systems. ([newswire.com](https://www.newswire.com/news/irregular-raises-80-million-to-set-the-security-standards-for-frontier-ai?utm_source=openai))

Innovative Testing Methodologies

To rigorously test AI models, Irregular employs controlled simulations that mimic real-world threats, including antivirus evasion and autonomous offensive actions. This proactive approach enables AI developers to uncover vulnerabilities early and implement necessary safeguards before public deployment. By simulating both attacker and defender roles within complex network environments, Irregular provides a comprehensive assessment of a model’s resilience. ([techcrunch.com](https://techcrunch.com/2025/09/17/irregular-raises-80-million-to-secure-frontier-ai-models/?utm_source=openai))

Investor Confidence and Future Outlook

The substantial investment from Sequoia Capital and Redpoint Ventures reflects strong confidence in Irregular’s mission and capabilities. Shaun Maguire, a partner at Sequoia Capital, remarked, The real AI security threats haven’t emerged yet. What stood out about the Irregular team is how far ahead they’re thinking. They’re working with the most advanced models being built today and laying the groundwork for how we’ll need to make AI reliable in the years ahead. ([siliconangle.com](https://siliconangle.com/2025/09/17/irregular-raises-80m-set-ai-security-standards-frontier-models/?utm_source=openai))

Conclusion

As AI technologies continue to advance at an unprecedented pace, the importance of robust security measures cannot be overstated. Irregular’s recent funding and strategic initiatives position the company at the forefront of AI security, ensuring that the next generation of AI models is both powerful and secure. Through innovative testing methodologies, strategic partnerships, and a forward-thinking approach, Irregular is set to play a crucial role in shaping the future of secure AI deployment.