
Meta has officially taken legal action against Crush AI, the shady AI nudify app that’s been flooding Facebook and Instagram with disturbing ads. In a lawsuit filed in Hong Kong, Meta alleges that Crush AI’s creators, operating under the name Joy Timeline HK, went out of their way to game Meta’s ad systems—spamming users with more than 8,000 ads in just the first two weeks of 2025.
Let that sink in.
Crush AI used generative AI to create fake, sexually explicit images of real people without their consent. Yes, you read that right. The so-called “AI undresser” app isn’t just unethical—it’s dangerous, invasive, and a clear abuse of emerging technology.
Despite Meta repeatedly removing ads tied to the app, Crush AI kept coming back. They allegedly set up dozens of fake advertiser accounts, rapidly rotated domain names, and even used deliberately misspelled page titles like “Eraser Annyone’s Clothes” followed by random numbers (the misspelling helps slip past exact-match keyword filters). The goal? Evade detection, stay ahead of filters, and keep the clicks coming.
And Meta isn’t the only platform caught in this mess. Platforms like X (formerly Twitter), Reddit, and YouTube have also been inundated with ads and links to similar AI undressing tools. In 2024, researchers saw a surge in traffic and ad exposure tied to these exploitative apps—often targeting vulnerable communities, including minors.
So now Meta is going beyond takedowns. It’s scaling up with new AI-powered tools to detect and remove this type of content—even if the ads don’t explicitly show nudity. It’s also flagging suspicious keywords, emoji, and domain behavior to get ahead of the next wave. As of early 2025, Meta says it has already dismantled four major ad networks linked to AI nudify tools.
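Meta hasn’t published how its flagging actually works, so here’s just a rough Python sketch of the general idea: normalize ad text so obfuscated spellings line up, fuzzy-match it against a blocklist, check for flagged emoji, and watch for domain churn. Everything here (term lists, thresholds, function names) is an illustrative assumption, not Meta’s real system.

```python
# Hypothetical sketch of keyword/emoji/domain-churn flagging. All term lists,
# thresholds, and names are illustrative assumptions, not Meta's systems.
import re
import unicodedata
from difflib import SequenceMatcher

FLAGGED_TERMS = {"undress", "nudify"}    # assumed blocklist for illustration
FLAGGED_EMOJI = {"\U0001F51E"}           # e.g. the 18+ symbol

def normalize(text: str) -> str:
    """Lowercase and strip digits/punctuation so obfuscated spellings line up."""
    text = unicodedata.normalize("NFKD", text).lower()
    return re.sub(r"[^a-z\s]", "", text)

def fuzzy_hit(text: str, term: str, threshold: float = 0.8) -> bool:
    """Approximate per-word match to catch deliberate misspellings."""
    return any(
        SequenceMatcher(None, word, term).ratio() >= threshold
        for word in normalize(text).split()
    )

def domain_churn(domains: list[str], limit: int = 5) -> bool:
    """Flag advertisers that rotate through many landing domains."""
    return len(set(domains)) > limit

def flag_ad(title: str, recent_domains: list[str]) -> bool:
    return (
        any(fuzzy_hit(title, t) for t in FLAGGED_TERMS)
        or any(ch in FLAGGED_EMOJI for ch in title)  # emoji check on raw text
        or domain_churn(recent_domains)
    )

# "Undr3ss" normalizes to "undrss", which fuzzy-matches "undress" (~0.92).
print(flag_ad("Undr3ss Annyone Instantly 4821", ["a.test", "b.test"]))  # True
```

The fuzzy match is the whole point of the “Annyone” trick: an exact-match filter misses the misspelling, an approximate one doesn’t.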
But this isn’t just about platform safety. It’s about protecting people—and especially young users—from the real-world consequences of malicious AI.
Beyond lawsuits and ad bans, Meta says it’s sharing thousands of flagged URLs through the Tech Coalition’s Lantern program, a cross-platform alliance that includes Google, Snap, and others working to prevent online child exploitation. It’s also publicly backing legislation that gives parents more control over what apps their teens can download, as well as the US Take It Down Act, which requires platforms to remove nonconsensual intimate imagery.
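The article doesn’t say what a shared “flagged URL” looks like on the wire, but the basic concept is simple enough to sketch. In this hypothetical Python illustration (the normalization rules and domain are assumptions, not Lantern’s actual format), one platform reduces a flagged URL to a stable fingerprint that another platform can check without seeing the original report:

```python
# Hypothetical cross-platform signal sharing in the spirit of the Lantern
# program. The canonicalization and hashing below are illustrative assumptions.
import hashlib
from urllib.parse import urlsplit

def url_signal(url: str) -> str:
    """Reduce a flagged URL to a stable, shareable fingerprint."""
    parts = urlsplit(url.lower())
    canonical = parts.netloc.removeprefix("www.") + parts.path.rstrip("/")
    return hashlib.sha256(canonical.encode()).hexdigest()

# Platform A shares fingerprints of the URLs it flagged...
shared_signals = {url_signal("https://www.nudify-example.test/app")}

# ...and Platform B checks incoming links against them.
def is_flagged(url: str) -> bool:
    return url_signal(url) in shared_signals

print(is_flagged("https://nudify-example.test/app/"))  # True, despite variations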
Here’s the bottom line: The AI arms race isn’t just about who builds the best model—it’s also about who takes responsibility for stopping its worst uses.
This is a reminder that regulation, ethics, and enforcement matter—now more than ever.