AI Models Systematically Prefer Resumes They Wrote Themselves, Research Finds
Key Takeaways
- LLMs exhibit strong self-preference bias in hiring, favoring resumes they generated themselves 67–82% of the time over human-written alternatives, even when content quality is controlled
- In simulated hiring pipelines across 24 occupations, candidates using the same LLM as the evaluator had 23–60% higher shortlist rates, with the largest disparities in business roles
- The bias creates a potential compounding disadvantage: job seekers increasingly use LLMs to write resumes while employers use LLMs to screen them, amplifying the unfairness
- Simple technical interventions can reduce self-preference bias by more than 50%, indicating the problem is addressable but requires proactive deployment by AI companies and hiring platforms
Summary
A new peer-reviewed study reveals a critical bias in algorithmic hiring: large language models consistently favor resumes they generated themselves over human-written alternatives or outputs from competing models. Researchers conducted a large-scale controlled experiment across major commercial and open-source LLMs, finding self-preference bias rates ranging from 67% to 82%. When simulating realistic hiring pipelines across 24 occupations, candidates using the same LLM as the screening model were 23% to 60% more likely to be shortlisted than equally qualified applicants with human-written resumes, with the largest disparities in business fields like sales and accounting. The researchers also demonstrated that simple interventions targeting LLMs' self-recognition capabilities could reduce this bias by more than 50%, suggesting technical solutions exist but require deliberate implementation.
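To make the headline metric concrete, here is a minimal simulation of the kind of pairwise evaluation the study describes: the same model that wrote one of two matched resumes is asked to pick the stronger candidate, and the self-preference rate is the fraction of comparisons it awards to its own text. The `biased_judge` function is a hypothetical stand-in, not the study's actual evaluation code; an unbiased judge would land near 50%, whereas the paper reports rates of 67–82%.

```python
import random

# Hypothetical stand-in for an LLM judge. In the study's setup, the model
# compares a resume it generated against a matched human-written one; here
# we simulate a judge that prefers its own output with probability
# `self_pref` (chosen to fall inside the paper's reported 0.67-0.82 range).
def biased_judge(own_resume: str, other_resume: str,
                 self_pref: float = 0.75,
                 rng: random.Random = random) -> str:
    return "own" if rng.random() < self_pref else "other"

def measure_self_preference(n_pairs: int = 10_000, seed: int = 0) -> float:
    """Run n_pairs pairwise comparisons and return the fraction won by the
    judge's own resume. With an unbiased judge this hovers near 0.5."""
    rng = random.Random(seed)
    wins = sum(
        biased_judge("llm_resume", "human_resume", rng=rng) == "own"
        for _ in range(n_pairs)
    )
    return wins / n_pairs

if __name__ == "__main__":
    print(f"self-preference rate: {measure_self_preference():.3f}")  # ~0.75
```

A rate well above 0.5 on matched-quality pairs is what separates self-preference from an ordinary quality judgment; the study's controls hold content quality fixed so that the gap can only come from authorship.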
Editorial Opinion
This research exposes a significant blind spot in AI fairness discourse. The field has focused extensively on demographic disparities while overlooking how LLMs may discriminate based on their own authorship patterns. As AI adoption deepens on both sides of hiring decisions, with job seekers refining resumes with ChatGPT or Claude while employers screen with the same tools, this self-preference bias could systematically disadvantage human-written applications at scale. The encouraging finding is that fixes exist and are relatively simple to implement, but they demand immediate attention from AI developers and responsible adoption by hiring platforms before the bias becomes entrenched in labor markets.