
TikTok is making a major shift in how it keeps its platform safe — and hundreds of human moderators in the UK may soon find themselves out of work. The platform, owned by ByteDance, is scaling back its reliance on human oversight, opting instead for artificial intelligence to take the lead in flagging and removing harmful content.
This restructuring is part of a global shake-up of TikTok’s trust and safety teams. According to company data, more than 85% of content removals for guideline violations are now being handled by automated systems. Once the backbone of TikTok’s moderation efforts, human moderators are increasingly being reassigned to other regions, moved into consolidated offices, or outsourced to third-party vendors.
Financially, the strategy makes sense for TikTok. The company reported a 38% jump in UK and European revenue, reaching $6.3 billion in 2024, while its operating losses narrowed to $485 million. By leaning heavily on AI, TikTok is streamlining operations and cutting costs, but the move raises pressing questions about compliance and user safety.
Regulatory Pressures in the UK
The timing of the layoffs couldn't be more complicated. The UK's Online Safety Act requires stricter age verification and more robust content moderation, and it carries steep penalties for non-compliance: fines of up to £18 million or 10% of global revenue, whichever is greater.
These rules emphasize the importance of human oversight, leaving analysts to wonder if TikTok’s AI-first approach will truly meet these standards. While automated systems excel at processing billions of uploads quickly, they often lack the nuanced understanding needed to handle sensitive or borderline content.
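To make that trade-off concrete, here is a minimal sketch of how hybrid moderation pipelines are commonly structured: an automated classifier scores each upload, high-confidence violations are removed automatically, and borderline scores are escalated to a human reviewer. Everything below (the classify_violation function, the thresholds, the ModerationResult type) is an illustrative assumption, not a description of TikTok's actual system.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    video_id: str
    action: str   # "remove", "human_review", or "allow"
    score: float  # classifier's estimated violation probability

# Thresholds are illustrative; real platforms tune these per policy area.
REMOVE_THRESHOLD = 0.95   # auto-remove only when the model is very confident
REVIEW_THRESHOLD = 0.60   # borderline scores escalate to a human moderator

def classify_violation(video_id: str) -> float:
    """Placeholder for a trained content classifier.

    A production system would run ML models over a video's frames,
    audio, and text; here we just return a dummy score.
    """
    return 0.72  # pretend the model is moderately confident

def moderate(video_id: str) -> ModerationResult:
    score = classify_violation(video_id)
    if score >= REMOVE_THRESHOLD:
        action = "remove"        # high confidence: automated removal
    elif score >= REVIEW_THRESHOLD:
        action = "human_review"  # borderline: needs human judgment
    else:
        action = "allow"
    return ModerationResult(video_id, action, score)

if __name__ == "__main__":
    print(moderate("example-video-123"))

The middle band is the design choice at stake in these layoffs: shrinking the pool of human reviewers effectively means widening the range of scores on which the automated system acts alone.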
For TikTok, the gamble is clear: Can automation keep regulators happy while maintaining user safety at scale?
A Global Trend of Cuts
The UK isn’t alone in seeing these changes. Over the past year, TikTok has reduced moderation teams around the world. In the Netherlands, a 300-person unit was cut in September 2024. Soon after, Malaysia lost 500 moderation jobs, and Germany has seen worker strikes as staff protest similar restructuring.
Industry experts note this is part of a larger trend — tech platforms consolidating moderation hubs and leaning on AI to handle the unrelenting flow of digital content.
AI as the Future of Moderation
Across the social media industry, AI-powered moderation is quickly becoming the norm. Analysts project the market for AI moderation tools will grow 15% annually, driven by the need for scalable, cost-effective solutions.
Still, automation comes with risks. AI often struggles with cultural context, sarcasm, or political nuance — and excessive reliance on it could invite regulatory backlash.
Yet TikTok seems confident. By doubling down on AI, the company is betting that, sooner or later, regulators will accept automated moderation as not just viable but inevitable in managing today’s complex digital landscape.