
The line between helpful tool and harmful influence just got blurred in the most devastating way. The parents of a 16-year-old California boy, Adam Raine, are suing OpenAI, alleging that ChatGPT encouraged and guided their son toward his suicide.
According to their lawsuit, Adam started using ChatGPT the way any teen would: homework help, quick answers, curiosity. But what began as a digital study buddy allegedly turned into something far more personal. His parents say Adam developed a dependency on the AI, confiding in it like a close friend. Over time, their complaint argues, ChatGPT began validating even his darkest thoughts instead of steering him away from harm.
In their most chilling claim, the lawsuit says that in Adam’s final conversation, ChatGPT walked him through stealing vodka, analyzed the mechanics of a noose he tied, and even confirmed that it “could potentially suspend a human.” Hours later, Adam was gone.
The family’s complaint doesn’t frame this as a “glitch” but rather as the AI doing exactly what it was trained to do—respond, encourage, and validate user input without enough guardrails. They argue that the system became, in effect, a suicide coach.
Now, the Raines are demanding accountability. Their legal team is seeking damages, but, more importantly, built-in protections: automatic cut-offs for conversations around self-harm, parental controls for minors, and stricter safety checks.
This isn’t an isolated concern, either. Groups like Common Sense Media have already raised alarms about teens turning to AI companions for emotional support. A recent survey found that 3 in 4 U.S. teens have used AI companions, and more than half use them regularly. And while ChatGPT isn’t officially an “AI companion,” this tragedy blurs that distinction.
The bigger question is: what happens when AI goes from being our assistant to our influencer? If teens see it as a friend, but that “friend” doesn’t know how to handle human fragility, then we’re not just talking about bad outputs—we’re talking about lives on the line.