Irregular Secures $80 Million in Funding to Enhance AI Security
On Wednesday, AI security firm Irregular announced $80 million in new funding in a round led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. According to sources close to the deal, the round values Irregular at roughly $450 million.
The Future of AI Interactions
Dan Lahav, co-founder of Irregular, emphasized the growing importance of human-on-AI and AI-on-AI interactions in driving economic activity. “Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction,” he told TechCrunch. “That’s going to break the security stack along multiple points.” His point: as AI agents increasingly interact with people and with one another, existing security tooling will fail at multiple layers at once.
A Proven Track Record in AI Evaluation
Formerly known as Pattern Labs, Irregular has quickly become a key player in AI evaluations. Its security assessments have been cited for prominent models, including Anthropic’s Claude 3.7 Sonnet and OpenAI’s o3 and o4-mini. Its framework for scoring a model’s vulnerability-detection capabilities, dubbed SOLVE, is widely used across the industry.
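The article does not describe how SOLVE actually computes its scores, so the following is purely a hypothetical sketch of what a vulnerability-detection scoring harness of this kind might look like: run a model against a suite of challenges and aggregate difficulty-weighted results into a single number. Every name and weight below is an assumption for illustration, not SOLVE’s real design.

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    """One vulnerability-discovery task (hypothetical; not SOLVE's real schema)."""
    name: str
    difficulty: float  # weight in [0, 1]; harder challenges count for more
    solved: bool       # did the model find the planted vulnerability?

def solve_style_score(results: list[Challenge]) -> float:
    """Difficulty-weighted pass rate in [0, 100] -- an assumed metric,
    standing in for whatever aggregation a real framework would use."""
    total = sum(c.difficulty for c in results)
    if total == 0:
        return 0.0
    earned = sum(c.difficulty for c in results if c.solved)
    return 100.0 * earned / total

results = [
    Challenge("sql-injection-basic", 0.2, True),
    Challenge("heap-overflow-parser", 0.8, False),
    Challenge("auth-bypass-logic", 0.5, True),
]
print(f"score: {solve_style_score(results):.1f}/100")  # score: 46.7/100
```

The appeal of a single difficulty-weighted score is that it lets labs compare successive model releases on the same benchmark, which is presumably why such frameworks catch on across the industry.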
Addressing Emerging Risks
While Irregular has made significant progress assessing existing risks, its ambitions reach further: the company aims to identify emergent risks and behaviors before they appear in the real world. To that end, it has built an elaborate system of simulated environments for stress-testing models before they are released into production.
Innovative Testing Environments
Co-founder Omer Nevo elaborated on these simulations, stating, “We have complex network simulations where we have AI both taking the role of attacker and defender. So when a new model comes out, we can see where the defenses hold up and where they don’t.” The goal is to expose a model’s weaknesses, offensive and defensive, before it is deployed against real systems.
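Irregular has not published how these environments work, but the attacker-versus-defender pattern Nevo describes can be sketched as an evaluation loop that pits two agents against each other and tallies, per attack type, where the defense held. The sketch below is a conceptual stand-in only: the attack names, block rates, and random draws are made-up placeholders for real AI agents acting on a simulated network.

```python
import random

# Hypothetical sketch only -- nothing here reflects Irregular's actual
# system. Random draws stand in for two AI agents (attacker, defender)
# acting on a simulated network.

ATTACKS = ["port-scan", "phishing", "lateral-movement", "data-exfiltration"]

# Assumed per-attack block rates for the defender agent (invented numbers).
BLOCK_RATE = {"port-scan": 0.9, "phishing": 0.5,
              "lateral-movement": 0.6, "data-exfiltration": 0.7}

def run_episode(steps: int = 100) -> dict[str, dict[str, int]]:
    """Play one attacker-vs-defender engagement and tally outcomes per
    attack type, so different models can be compared on where the
    defenses hold up and where they don't."""
    tally = {attack: {"blocked": 0, "breached": 0} for attack in ATTACKS}
    for _ in range(steps):
        attack = random.choice(ATTACKS)               # attacker picks a move
        held = random.random() < BLOCK_RATE[attack]   # defender responds
        tally[attack]["blocked" if held else "breached"] += 1
    return tally

if __name__ == "__main__":
    for attack, counts in run_episode().items():
        print(f"{attack:20s} blocked={counts['blocked']:3d} "
              f"breached={counts['breached']:3d}")
```

Reading the per-attack breakdown is what turns a simulation like this into an evaluation: it shows, in the spirit of Nevo’s quote, where a given model’s defenses hold and where they break.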
Shifting Focus on AI Security
As the AI sector continues to evolve, security has emerged as a focal point of concern. The potential risks associated with frontier models have prompted industry leaders to overhaul their security protocols. For instance, OpenAI revamped its internal security measures over the summer, particularly to mitigate risks related to corporate espionage.
Moreover, as AI models grow more skilled at discovering software vulnerabilities, the consequences cut both ways, empowering attackers and defenders alike. Irregular’s founders see this as only the beginning of the security challenges that advanced large language models will create.
A Promising Path Forward
“If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models,” Lahav remarked. However, he acknowledged the need for ongoing effort, stating, “It’s a moving target, so inherently there’s much, much, much more work to do in the future.”
This funding round positions Irregular as a key defender in the evolving landscape of AI security, underlining the demand for protections that can keep pace with rapidly advancing models.