BotBeat
INDUSTRY REPORT · Mercor · 2026-03-12

PhDs and Experts Earn $16/Hour Training AI Systems That Replace Their Own Jobs

Key Takeaways

  • Highly educated workers, including PhDs, are being paid $16–$45/hour to label training data for AI systems, far below market rates for their expertise
  • Workers often don't know whose AI they're training, lack job security, and can have projects canceled abruptly without compensation
  • The result is a paradox: AI that eliminates professional jobs requires those same professionals to train it at poverty wages
Source: Hacker News — https://nymag.com/intelligencer/article/white-collar-workers-training-ai.html

Summary

A new investigative report reveals the troubling reality of AI data labeling work, where highly educated professionals—including PhDs and experts in their fields—are being hired at poverty wages to train the very AI systems that have eliminated their careers. Workers like Katya, a freelance journalist turned content marketer whose job was automated by ChatGPT, find themselves accepting $16-$45 per hour gigs to label training data for AI models, often without knowing whose AI they're training or for what purpose. Companies like Mercor recruit these workers through deceptive job postings and require them to install monitoring software while offering no job security, as projects can be abruptly canceled with no warning or severance.

The work involves creating training datasets by writing example prompts and ideal chatbot responses, evaluating AI conversations, and defining quality criteria—labor that is fundamental to building modern large language models. Despite the critical importance of this work to AI development, the compensation is shockingly low relative to workers' qualifications and the value they're creating. The situation highlights a systemic tension in AI development: the industry requires massive amounts of human expertise to train its models, yet it treats those workers as disposable labor with minimal protections or transparency.


Editorial Opinion

This story exposes a morally troubling underbelly of the AI boom: the exploitation of skilled workers who are forced to participate in their own technological obsolescence. While AI companies reap enormous valuations from models trained on human expertise, the workers providing that expertise are treated as interchangeable commodities. The lack of transparency, job security, and fair compensation for workers training frontier AI models raises serious questions about the sustainability and ethics of current AI development practices.

Generative AI · Data Science & Analytics · Ethics & Bias · Jobs & Workforce Impact

More from Mercor

Mercor
PRODUCT LAUNCH

Mercor Launches Retroactive Payment Program for AI Training Work, Addressing IP Ownership Concerns

2026-04-03
Mercor
POLICY & REGULATION

Mercor Faces Class Action Lawsuit Over Supply Chain Attack Exposing 40,000 Users' Personal Data

2026-04-03
Mercor
POLICY & REGULATION

Mercor AI Hit by Security Breach Through LiteLLM Vulnerability

2026-04-02

Suggested

Anthropic
RESEARCH

Inside Claude Code's Dynamic System Prompt Architecture: Anthropic's Complex Context Engineering Revealed

2026-04-05
Oracle
POLICY & REGULATION

AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong?

2026-04-05
GitHub
PRODUCT LAUNCH

GitHub Launches Squad: Open Source Multi-Agent AI Framework to Simplify Complex Workflows

2026-04-05
© 2026 BotBeat