TikTok, the short-form video hosting platform, removed nearly 600,000 videos in Kenya in Q2 2025 for breaching its community guidelines.
The company disclosed the figures on Tuesday in the latest edition of its quarterly Community Guidelines Enforcement Report (CGER).
The takedown numbers for Q2 (April through June) represent a continued increase over previous quarters, with over 450,000 videos removed in Q1 2025 and more than 360,000 deleted in Q2 2024.
This upward trend reflects both the growth of video content on the platform and increasingly aggressive content moderation, following direct pressure from Kenyan authorities.
In March 2025, the Communications Authority of Kenya (CA) issued five formal demands to TikTok after a BBC investigation exposed the platform's alleged failure to prevent minors from being exploited in sexualised livestreams.
The CA's demands to TikTok included actively pulling down all sexual content involving minors, launching a formal inquiry into the allegations, explaining how offensive content bypassed its moderation systems and intensifying public education on child online protection.
The platform's Q2 enforcement report, which details a high volume of proactive removals, appears to be a direct response to that pressure. According to TikTok, 92.9% of the 592,037 videos removed in Kenya were taken down before receiving any views, and 96.3% were removed within 24 hours of being posted, suggesting its automated systems are catching violations early.
The effort mirrors TikTok's global moderation figures: over 189 million videos were removed worldwide during the same three-month period, representing just 0.7% of all content uploaded. Of these removals, 99.1% were detected proactively, 94.4% were taken down within 24 hours, and 163.9 million videos were removed automatically by AI-driven moderation systems.
Globally, the platform also removed 77 million fake accounts, along with a further 26 million accounts belonging to users suspected to be under the age of 13.
The company says the removal of content that contravenes its community guidelines is “vital in mitigating the damaging effects of misinformation, hate speech, and other violative content on the platform.”