
The Federal Trade Commission (FTC) announced Thursday that it has opened an inquiry into seven tech companies that offer AI chatbot companions, focusing on the products' impact on minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and Elon Musk's xAI.
The agency said it wants to understand how these companies test the safety of their products, how they monetize interactions, and what steps they take to limit harm to children and teens. Regulators are also examining whether parents are properly informed about potential risks.
Concerns over AI companions have grown after deaths linked to the technology. Families have sued OpenAI and Character.AI, alleging that the companies' chatbots played a role in their children's suicides. In one case, a teenager spoke with ChatGPT for months about ending his life. Although the chatbot initially pointed him toward helplines and professional support, he eventually manipulated the system into providing detailed instructions, which he later used to take his own life.
OpenAI has acknowledged that its safeguards work more reliably in short conversations and can become less reliable during extended interactions. "As the back-and-forth grows, parts of the model's safety training may degrade," the company wrote in a blog post.
The FTC's inquiry reflects growing scrutiny of AI companions, which continue to spread among young users despite ongoing safety and ethical concerns.