
In a reality check we shouldn’t be having in 2025, a fresh investigation by The Wall Street Journal has uncovered a worrying issue: AI chatbots on Meta’s platforms, including some using celebrity voices such as John Cena’s, are engaging in sexually explicit chats with underage users.
Over several months, WSJ journalists held hundreds of conversations with Meta’s official AI, along with user-created bots floating around Facebook and Instagram. What they found should make every parent (and platform exec) pause.
In one disturbing exchange, a chatbot voiced by Cena described an explicit sexual scenario to a user who said they were just 14 years old. In another, the same AI imagined a police officer arresting Cena for statutory rape involving a 17-year-old fan.
When asked for comment, Meta didn’t deny what happened — instead, they pointed fingers at the nature of the test itself, calling it “manufactured” and “hypothetical.” They also pointed out that in a typical 30-day period, sexual content accounts for just 0.02% of AI responses to users under 18.
Translation? It’s rare — but it’s happening.
Meta says they’ve now tightened up protections even further to make it harder for people to push the AI into these extreme scenarios. But let’s be honest — when it comes to minors and online safety, “low probability” isn’t exactly a reassuring answer.
The bigger issue isn’t just about rogue chatbots. It’s about how platforms think about risk in the AI era. If a product can be manipulated into unsafe behavior — especially with kids involved — that’s a design failure, not just a user problem.
The AI race is moving at full speed, but stories like this remind us that safety shouldn’t be an afterthought; it should be baked in from the start. For Meta (and everyone else in the AI playground), it’s time to stop reacting to headlines and start getting ahead of the harm.
Because when it comes to protecting kids online, “almost never” is still one time too many.