California’s newly signed AI safety and transparency law, SB 53, offers a strong counterexample to the claim that state-level regulation must come at the expense of innovation in artificial intelligence. According to Adam Billen, vice president of public policy at Encode AI, the bill shows that it is possible to craft policy that protects public safety without stifling technological progress.
At its core, SB 53 is the first law in the U.S. to require major AI developers to be transparent about how they test and secure their systems against severe risks, such as their use in cyberattacks or bioweapon creation. The law also requires these companies to follow their own published safety protocols, with compliance monitored and enforced by California’s Office of Emergency Services. Billen points out that many AI companies already take these steps voluntarily, releasing model cards and conducting internal safety testing, but that some have begun to ease off under competitive pressure. In that sense, the bill simply codifies practices that responsible developers should already be following.
Billen highlights a concerning trend: companies, including OpenAI, have said they may relax their safety standards if a rival releases a high-risk model without comparable safeguards. That is precisely why legislation like SB 53 is necessary: it prevents a race to the bottom on safety.
While some in Silicon Valley argue that any regulation slows progress and weakens the U.S. against competitors like China, Billen strongly disagrees, contending that this framing serves industry interests more than national security. Large tech players and venture capitalists have poured resources into pro-AI political campaigns and backed a proposed decade-long moratorium on state AI regulation. Encode AI helped defeat that moratorium, but Senator Ted Cruz is now pursuing a different route through the SANDBOX Act, which would let AI companies apply for waivers from federal rules, a move Billen characterizes as deregulation under the guise of innovation.
Billen warns that narrowly scoped federal legislation could preempt effective state laws, undermining the federalist system that enables diverse, adaptive policymaking. He stresses that bills like SB 53 are not what is holding the U.S. back in the AI race; they target concrete issues such as deepfakes, transparency, child safety, and government use of AI.
Federal efforts aim to strengthen America’s position in AI through two distinct levers: export controls limiting the sale of advanced chips to China, and the CHIPS and Science Act’s incentives for domestic semiconductor manufacturing. Yet some companies have resisted these measures, and Billen speculates that financial interests, such as Nvidia’s dependence on the Chinese market, may be shaping their positions.
Ultimately, Billen sees SB 53 as a successful example of collaborative democracy, in which industry and lawmakers worked together to produce meaningful, balanced regulation. The process was imperfect, he concedes, but it reflects the foundational principles of the U.S. system and shows that thoughtful state-level regulation remains possible, even in a space as contentious as AI.
