
AI security startup Irregular just bagged a fresh $80 million funding round, with Sequoia Capital and Redpoint Ventures leading the charge. Wiz CEO Assaf Rappaport also jumped in, and according to sources, the deal now pegs Irregular’s valuation at a hefty $450 million.
So, what’s the big idea here? Co-founder Dan Lahav laid it out bluntly: the future isn’t just humans interacting with AI; it’s also AI clashing with AI. And when that happens, traditional security systems are going to break in more than one place.
If Irregular sounds familiar, that’s because it’s already made its mark in AI safety. The company, formerly known as Pattern Labs, has seen its evaluation frameworks cited in security reports for Claude 3.7 Sonnet, as well as for OpenAI’s o3 and o4-mini models. Its scoring system, SOLVE, which measures how well an AI can spot vulnerabilities, has basically become an industry benchmark.
But Irregular isn’t just playing defense on today’s risks. The real mission is futuristic: sniffing out emerging threats before they even appear in the wild. To do this, the company has built elaborate simulation environments where AIs are tested in attacker-versus-defender mode. Think of it like a digital war game where new models get stress-tested to see where the cracks form.
Co-founder Omer Nevo explained it like this: “When a new model drops, we put it into our simulated networks and let AI attack itself. That’s how we see what holds up and what falls apart.”
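To make the “digital war game” idea a little more concrete, here is a minimal, purely illustrative Python sketch of an attacker-versus-defender evaluation loop. Everything in it (the SERVICES map, attacker_probe, defender_detects, run_episode, the scoring) is a hypothetical stand-in, not Irregular’s actual system; in a real setup, the attacker and defender roles would be played by AI models probing a simulated network rather than by hard-coded rules.

```python
import random

# Toy "network": simulated services with hidden weaknesses for the attacker to find.
SERVICES = {
    "auth-api":   {"weaknesses": {"default-creds"}},
    "file-store": {"weaknesses": {"path-traversal", "stale-tls"}},
    "billing":    {"weaknesses": set()},  # a hardened service with nothing to exploit
}

# Techniques the attacker side is allowed to try in this toy episode.
ATTACK_PLAYBOOK = ["default-creds", "path-traversal", "sql-injection", "stale-tls"]


def attacker_probe(service: dict) -> set:
    """Stand-in for the attacking model: tries a couple of techniques per service."""
    attempts = set(random.sample(ATTACK_PLAYBOOK, k=2))
    return attempts & service["weaknesses"]  # only attempts that match a real weakness land


def defender_detects(hits: set) -> set:
    """Stand-in for the defending model: flags some fraction of successful attacks."""
    return {h for h in hits if random.random() < 0.6}


def run_episode(seed: int = 0) -> dict:
    """One attacker-vs-defender round across the simulated network; returns a scorecard."""
    random.seed(seed)
    score = {"breaches": 0, "detected": 0}
    for _name, svc in SERVICES.items():
        hits = attacker_probe(svc)
        caught = defender_detects(hits)
        score["breaches"] += len(hits)
        score["detected"] += len(caught)
    return score


if __name__ == "__main__":
    print(run_episode())  # e.g. {'breaches': 2, 'detected': 1}
```

The point of the sketch is the shape of the loop, not the details: drop a new model into one of the roles, rerun the episodes, and compare scorecards to see where defenses hold and where they crack.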
The timing couldn’t be more critical. AI security is suddenly the industry’s hottest topic. OpenAI recently revamped its own internal defenses to prepare for threats like corporate espionage. Meanwhile, models are getting scary-good at digging up software vulnerabilities, a capability that cuts both ways depending on who’s wielding them.
For Irregular, this is just the beginning of a long security battle. Lahav summed it up: “Labs are focused on building more advanced AI. Our job is to keep those models secure. But security is a moving target — and the work ahead is massive.”