California just made history. Governor Gavin Newsom has signed SB 53, the nation's first law requiring transparency and accountability from large AI companies.
So what does this bill actually do? In short, it forces big AI labs (think OpenAI, Anthropic, Meta, and Google DeepMind) to open up about their safety practices. No more black box. The law requires these companies to disclose their safety protocols and gives employees whistleblower protections if they see something concerning inside their organizations.
Even bigger: SB 53 sets up a system for both the public and AI firms to report critical safety incidents to California's Office of Emergency Services. That means if an AI model goes rogue without human oversight, whether by sparking a cyberattack or behaving manipulatively, there's now an official way to raise the alarm. Notably, some of these requirements go further than what the EU AI Act currently demands.
Reactions? Mixed, of course. Anthropic gave the bill a thumbs-up, but Meta and OpenAI lobbied hard against it. OpenAI even sent Gov. Newsom an open letter urging him not to sign. Their main concern? A state-level law could create a messy “patchwork” of rules across the country. But supporters argue someone has to lead — and California has never been shy about setting national precedents.
This move comes at a time when Silicon Valley’s elite are pouring money into pro-AI super PACs, backing candidates who prefer a light-touch regulatory approach. Meanwhile, states like New York are watching closely, with their own AI bill already on Gov. Hochul’s desk.
As Newsom put it: “AI is the new frontier. California is not just here for it — we’re leading it.”
And this might just be the start — another bill, SB 243, is already on Newsom’s desk, this time targeting AI companion chatbots.
