BotBeat

Meta · POLICY & REGULATION · 2026-03-17

Meta and TikTok Prioritized Engagement Over Safety, Whistleblowers Reveal in BBC Investigation

Key Takeaways

  • Whistleblowers claim Meta and TikTok knowingly allowed more harmful content to increase engagement and compete for users, despite internal research showing the risks
  • Meta engineers were allegedly instructed to allow "borderline" harmful content due to competitive pressure from TikTok and concerns about stock price performance
  • TikTok reportedly prioritized relationships with politicians over child safety, deprioritizing reports of harmful content involving minors to avoid regulatory threats
Source: Hacker News (https://www.bbc.com/news/articles/cqj9kgxqjwjo)

Summary

More than a dozen whistleblowers have exposed how Meta and TikTok made deliberate decisions to allow more harmful content on their platforms after internal research revealed that outrage drives user engagement. A Meta engineer described being instructed by senior management to allow "borderline" harmful content—including misogyny and conspiracy theories—to compete with TikTok's explosive growth, with the justification tied to declining stock prices. Similarly, a TikTok employee revealed that the company prioritized relationships with political figures over safety concerns, including deprioritizing reports of harmful content involving children in favor of protecting politicians from regulatory threats.

Internal research from Meta's own scientists demonstrated the scale of the problem: Instagram Reels, launched in 2020 without sufficient safeguards, showed a significantly higher prevalence of bullying, harassment, hate speech, and violence than the rest of Instagram. Meta allegedly invested heavily in growing Reels, dedicating some 700 staff to it, while simultaneously refusing additional funding for its child safety and election integrity teams. Both companies have denied the allegations, with Meta stating that suggestions it would "deliberately amplify harmful content for financial gain" are false, while TikTok called the claims "fabricated."

  • Internal Meta research confirmed that algorithms prioritize profit over user wellbeing, with the company aware that financial incentives are misaligned with stated mission
  • The opacity of recommendation algorithms makes it difficult to identify and address harmful amplification at scale, according to former engineers

Editorial Opinion

This investigation reveals a troubling pattern in which algorithmic amplification of harmful content becomes a deliberate business strategy rather than an unintended consequence. While the companies deny purposefully amplifying harmful material, the whistleblower accounts and internal documents paint a picture of platforms where engagement metrics and competitive pressures systematically override safety considerations. The gap between these platforms' stated values and their internal decision-making underscores why algorithmic transparency and regulatory oversight remain critical. The complexity of machine learning systems cannot serve as an excuse for abandoning accountability when human decisions about resource allocation and content policies clearly shape what billions of users see.

Tags: Regulation & Policy · Ethics & Bias · AI Safety & Alignment · Misinformation & Deepfakes

© 2026 BotBeat