Meta and TikTok Prioritized Engagement Over Safety, Allowing Harmful Content to Spread, Whistleblowers Reveal
Key Takeaways
- Multiple whistleblowers reveal that Meta and TikTok knowingly prioritized engagement and profit over user safety when setting content moderation policies
- Internal Meta research confirmed that Instagram Reels had a significantly higher prevalence of bullying, hate speech, and violent content, yet growth teams received far more resources than safety teams
- TikTok allegedly prioritized relationships with political figures over child safety in content moderation decisions, aiming to head off regulatory threats and potential bans
- Both companies' algorithms operate as opaque systems with limited accountability, making it difficult to identify and address harm
- Financial incentives and competitive pressures drove both companies to relax safety standards rather than invest in content moderation
Summary
Over a dozen whistleblowers have exposed how Meta and TikTok made deliberate decisions to let more harmful content, including violence, sexual blackmail, terrorism, and conspiracy theories, proliferate on their platforms in pursuit of user engagement and competitive advantage. An engineer at Meta revealed that senior management instructed staff to allow "borderline" harmful content in order to counter TikTok's explosive growth, with one source citing Meta's declining stock price as the justification. Internal research shown to the BBC demonstrated that Instagram Reels, the TikTok competitor Meta launched in 2020, had significantly higher rates of bullying, harassment, hate speech, and violent content than other parts of Instagram, yet safety teams were denied additional staff while growth teams received 700 new hires.
A TikTok insider provided rare access to internal dashboards showing that the company prioritized maintaining relationships with political figures over addressing harmful content involving children, out of concern about regulation and potential bans. Meta's own research acknowledged that its algorithms offered content creators "a path that maximizes profits at the expense of their audience's wellbeing" and that current financial incentives were misaligned with the company's stated mission. Both companies deny the allegations: Meta calls suggestions that it deliberately amplified harmful content "wrong," and TikTok dismisses the claims as "fabricated." Machine-learning engineers, for their part, describe recommendation algorithms as "black boxes" that are difficult to control.
Editorial Opinion
This damning investigation exposes a troubling gap between the public commitments social media giants make about user safety and their actual operational priorities. The revelation that platforms knowingly allowed more harmful content to circulate because internal research showed outrage drove engagement represents a fundamental breach of trust with users—and particularly with young people who rely on these platforms. While both Meta and TikTok deny wrongdoing, the specificity and corroboration from multiple insiders suggest algorithmic profit optimization has systematically triumphed over platform integrity, raising critical questions about whether voluntary safety measures can ever be sufficient without stronger regulatory oversight.