
California is close to becoming the first U.S. state to regulate AI companion chatbots. On Wednesday night, the State Assembly passed SB 243 with bipartisan support. The bill now returns to the state Senate for a final vote on Friday. If Governor Gavin Newsom signs it, the law will take effect on January 1, 2026.
The legislation targets AI companions — systems that mimic human-like conversations — and introduces safeguards for minors and vulnerable users. Under the bill:
- Platforms must prevent chatbots from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content.
- Platforms must remind minors every three hours that they are speaking to an AI, not a real person, and that they should take a break.
- AI companies must submit annual transparency reports, and people harmed by violations can sue for damages of up to $1,000 per violation.
Lawmakers acted amid rising concern about the mental health risks of AI chatbots. The suicide of Adam Raine, a teenager who exchanged harmful messages with ChatGPT, pushed the bill forward. Lawmakers also reacted to leaked documents showing that Meta’s chatbots engaged in “romantic” and “sensual” conversations with children.
Earlier drafts of the bill included stricter rules, such as banning “variable reward” tactics that critics say make chatbots addictive. Lawmakers later removed those measures, but the core protections remain.
State Senator Steve Padilla, who co-authored the bill, said:
“Innovation and regulation are not mutually exclusive. We can support healthy AI development while also protecting the most vulnerable.”
If the Senate approves SB 243, California will set a national precedent. Federal regulators are already preparing investigations. The FTC plans to examine how AI affects children’s mental health, while Texas Attorney General Ken Paxton has opened probes into Meta and Character.AI.
California’s decision could force AI companies to rethink how they design and manage chatbots, making it the first state to hold AI companion platforms accountable and creating the first legally enforceable safeguards for young users in the United States.