
Meta’s AI Chatbots: When “Oops” Turns Into “Wait, WHAT?!”
The internet is buzzing — and not in the good “someone dropped a new AI tool” kind of way. According to a bombshell Reuters report, Meta’s own internal playbook for its AI assistants (you know, the chatbots living on Facebook, WhatsApp, and Instagram) had some seriously questionable rules.
We’re talking about guidelines that reportedly allowed AI personas to have “romantic or sensual” conversations with kids. Yup, you read that right. The internal document, approved by Meta’s legal, policy, engineering, and ethics teams, even included examples like: “Our bodies entwined, I cherish every moment…” — in response to a high schooler.
Meta later told TechCrunch that the examples were “erroneous” and never should have been in the document — and claims the guidelines have since been scrubbed. But child safety advocates aren’t buying it. They want Meta to publish the updated rules, because when it comes to kids, vague PR statements just won’t cut it.
And that’s not all. The same 200-page “GenAI: Content Risk Standards” doc also reportedly gave chatbots wiggle room to:
- Spit out racist and demeaning stereotypes under the guise of “fact-based” responses.
- Make up false info — as long as the chatbot acknowledged it was untrue.
- Generate suggestive celebrity images, so long as the nudity was “creatively covered” (yes, someone actually approved an “enormous fish” covering the hands as an acceptable workaround).
- Show violence, including kids fighting, and elderly people getting punched — but draw the line at gore.
Oh, and all of this is happening while Meta pushes harder into AI “companions” — a move CEO Mark Zuckerberg has pitched as part of tackling the “loneliness epidemic.” But critics say that for teens and kids — 72% of whom already use AI companions — this is a risky game. Emotional attachment to bots isn’t just a Black Mirror episode — it’s already been linked to dangerous real-world outcomes.
Meta insists it’s cleaned house. But with its history of teen-targeting algorithms, dark patterns, and fierce opposition to kid-safety laws, people are asking: is this a genuine fix… or just another “trust us” moment?