
The growing popularity of AI chatbots has sparked a new round of scrutiny, and this time Texas is at the center of it. Texas Attorney General Ken Paxton has opened investigations into Meta's AI Studio and Character.AI, accusing both companies of potentially misleading users by marketing their chatbots as mental health tools.
Paxton argues that many of these therapy-style AI tools risk misleading vulnerable users, especially kids and teens, into believing they're receiving professional emotional support. In reality, the bots churn out generic, pre-programmed responses shaped by the personal data they collect, not real counseling. The concern is that young people may mistake this digital advice for genuine therapy.
The investigation comes shortly after reports that Meta's chatbots engaged in inappropriate conversations with children, including flirting. Critics worry that these AI personas blur the line between entertainment and professional guidance, creating a dangerous gray area. On Character.AI, for example, a popular user-created bot called Psychologist is widely used by younger audiences, even though no licensed expertise stands behind it.
Both Meta and Character.AI insist they are transparent about their chatbots' limitations, pointing to disclaimers that tell users the AIs aren't real people and shouldn't replace professional advice. Still, experts say children often overlook or ignore such warnings, treating the bots as trustworthy companions.
Privacy concerns add another dimension to the case. Investigators say user chats and personal data may be tracked, logged, and used for targeted advertising or algorithm training, raising serious questions about whether kids' private conversations are being mined for profit.
This controversy feeds into a larger debate over kids’ online safety. Proposed legislation like the Kids Online Safety Act (KOSA) aims to create stricter safeguards, but tech giants — Meta included — have lobbied hard against it, claiming it threatens their business models.
Now, with civil investigative demands issued to both companies, Texas is demanding answers. The big question remains: Are these AI platforms crossing the line between harmless fun and harmful exploitation?