BotBeat

Mercor · INDUSTRY REPORT · 2026-05-11

From Hollywood to the Prompt: Why Writers Are Training AI

Key Takeaways

  • Hollywood writers and creatives are pivoting to AI data annotation and training work, driven by production collapse and financial desperation
  • AI training companies require extensive unpaid labor for screening and testing, despite offering premium hourly rates
  • The work is diverse and specialized (image annotation, red-teaming, safety testing) but emotionally taxing and poorly managed
Source: Hacker News (https://www.wired.com/story/i-work-in-hollywood-everyone-who-used-to-make-tv-now-training-ai/)

Summary

A Hollywood writer-turned-AI-trainer reveals how unemployed creatives are turning to AI data annotation work to survive amid entertainment industry stagnation. Following the 2023 writers' strike, in which AI protections were a central issue, and a subsequent production collapse in early 2025, entertainment workers are migrating to platforms like Mercor, Outlier, and others to label data, annotate images and video, and red-team AI models for safety testing. The work offers seemingly attractive hourly rates ($52–$150), but it also requires extensive unpaid screening and testing, complex task management systems, and emotionally taxing assignments, including generating harmful content and misinformation to probe AI safeguards. The trend underscores broader challenges in AI training labor, where skilled professionals from creative industries are being enlisted to build and evaluate AI systems at a fraction of their previous earnings.

  • The trend raises concerns about who is training AI systems and under what conditions, with implications for AI safety and bias

Editorial Opinion

The migration of creative professionals into AI training work reveals uncomfortable truths about how AI systems are being built. While companies like Mercor tout competitive hourly rates, the hidden costs of the work, such as unpaid screening hours, byzantine tooling, and emotional burden, effectively suppress real wages well below the advertised rates. More troubling still, entrusting AI safety work to overworked, underpaid contractors recruited from a desperate talent pool may be compromising the quality of safeguards that increasingly shape public discourse.

Machine Learning · Entertainment & Media · Ethics & Bias · AI Safety & Alignment · Jobs & Workforce Impact

More from Mercor

Mercor · POLICY & REGULATION

4TB of Voice and Identity Data Stolen From 40,000 Mercor AI Contractors in Lapsus$ Breach

2026-04-27
Mercor · POLICY & REGULATION

Mercor Data Breach Exposes Biometrics and ID Documents, Raising Deepfake Fraud Risks

2026-04-09
Mercor · INDUSTRY REPORT

Skilled Older Workers Turn to AI Training as Last Resort in Brutal Job Market

2026-04-09

Suggested

Anthropic · OPEN SOURCE

Anthropic Releases Prempti: Open-Source Guardrails for AI Coding Agents

2026-05-12
vlm-run · OPEN SOURCE

mm-ctx: Open-Source Multimodal CLI Toolkit Brings Vision Capabilities to AI Agents

2026-05-12
Meta · POLICY & REGULATION

Meta Employees Protest Mouse Tracking Technology at US Offices

2026-05-12
© 2026 BotBeat