PhDs and Experts Earn $16/Hour Training AI Systems That Replace Their Own Jobs
Key Takeaways
- Highly educated workers, including PhDs and subject-matter experts, are paid $16-$45/hour to label training data for AI systems, far below market rates for their expertise
- Workers often don't know whose AI they're training, lack job security, and can have projects canceled abruptly without warning or compensation
- Companies like Mercor use deceptive recruiting practices and require monitoring software while offering no transparency about a project's purpose or duration
- The practice is a paradox: the AI that eliminates professional jobs requires those same professionals to train it at poverty wages
Summary
A new investigative report reveals the troubling reality of AI data-labeling work, in which highly educated professionals, including PhDs and experts in their fields, are hired at poverty wages to train the very AI systems that eliminated their careers. Workers like Katya, a freelance journalist turned content marketer whose job was automated by ChatGPT, find themselves accepting gigs at $16-$45 per hour to label training data for AI models, often without knowing whose AI they're training or for what purpose. Companies like Mercor recruit these workers through deceptive job postings and require them to install monitoring software, while offering no job security: projects can be canceled abruptly with no warning or severance.
The work involves creating training datasets by writing example prompts and ideal chatbot responses, evaluating AI conversations, and defining quality criteria, labor that is fundamental to building modern large language models. Despite the critical importance of this work to AI development, the compensation is strikingly low relative to workers' qualifications and the value they create. The situation highlights a systemic tension in AI development: the industry requires massive amounts of human expertise to train its models, yet it treats the workers who supply that expertise as disposable labor with minimal protections or transparency.
Editorial Opinion
This story exposes a morally troubling underbelly of the AI boom: the exploitation of skilled workers who are forced to participate in their own technological obsolescence. While AI companies reap enormous valuations from models trained on human expertise, the workers providing that expertise are treated as interchangeable commodities. The lack of transparency, job security, and fair compensation for workers training frontier AI models raises serious questions about the sustainability and ethics of current AI development practices.