From Hollywood to the Prompt: Why Writers Are Training AI
Key Takeaways
- Hollywood writers and creatives are pivoting to AI data annotation and training work, driven by production collapse and financial desperation
- AI training companies require extensive unpaid labor for screening and testing, despite offering premium hourly rates
- The work is diverse and specialized—image annotation, red-teaming, safety testing—but emotionally taxing and poorly managed
Summary
A Hollywood writer-turned-AI-trainer reveals how unemployed creatives are turning to AI data annotation work to survive amid entertainment industry stagnation. Following the 2023 writers' strike, in which AI protections were a central issue, and a subsequent production collapse in early 2025, entertainment workers are migrating to platforms like Mercor, Outlier, and others to label data, annotate images and video, and red-team AI models for safety testing. The work offers seemingly attractive hourly rates ($52–$150), but requires extensive unpaid screening and testing, complex task management systems, and emotionally taxing assignments—including generating harmful content and misinformation to test AI safeguards. The trend underscores broader challenges in AI training labor, where skilled professionals from creative industries are being leveraged to build and evaluate AI systems at a fraction of their previous earnings. It also raises concerns about who is training AI systems and under what conditions, with implications for AI safety and bias.
Editorial Opinion
The migration of creative professionals into AI training work reveals uncomfortable truths about how AI systems are being built. While companies like Mercor tout competitive hourly rates, the hidden costs—unpaid screening hours, byzantine tools, and emotional burden—amount to substantial wage suppression. More troubling, entrusting safety-critical AI evaluation to overworked, underpaid contractors recruited from a desperate talent pool may be compromising the quality of safeguards that increasingly shape public discourse.