Research Reveals 'Intuition Rust': How AI Amplification Paradoxically Erodes Expert Skills in High-Stakes Work
Key Takeaways
- AI systems can mask skill erosion through initial productivity gains, creating asymptomatic harms that only become visible over extended use
- The study documents 'intuition rust'—gradual dulling of expert judgment—as a chronic consequence of AI dependence in high-stakes professional work
- Researchers propose a framework for dignified human-AI interaction that protects worker expertise and identity while maintaining productivity benefits
Summary
A year-long study presented at CHI 2026 reveals a hidden cost of AI adoption in the workplace: while AI systems boost productivity, they simultaneously erode the underlying expertise and judgment of knowledge workers. Researchers studying cancer specialists found that initial operational gains masked "intuition rust"—a gradual dulling of expert intuition and decision-making ability. Over time, these asymptomatic effects evolved into chronic harms including skill atrophy, loss of professional identity, and commoditization of workers' knowledge without adequate labor protections.
The research identifies the "AI-as-Amplifier Paradox," where productivity tools that enhance performance simultaneously weaken the deep expertise that made workers valuable in the first place. The study documents how professionals relying on AI assistance gradually lose confidence in their independent judgment, creating a dependency that leaves them vulnerable to skill degradation. This phenomenon was observed across healthcare and software engineering sectors, suggesting it is a systemic challenge rather than isolated to any single industry.
To address these concerns, researchers developed a framework for dignified human-AI interaction that balances productivity gains with worker protection. The framework introduces "sociotechnical immunity" mechanisms designed to help workers detect, contain, and recover from skill erosion while preserving professional identity and building worker power in AI-augmented environments. The approach recognizes that sustainable AI adoption requires safeguards that protect human expertise alongside organizational efficiency.
The researchers stress that protection mechanisms must be built into AI workflows from the start, because traditional labor protections often do not apply to AI-induced skill erosion.
Editorial Opinion
This research challenges the uncritical enthusiasm surrounding AI as a workplace productivity tool by documenting real harms to worker agency and expertise that remain invisible in efficiency metrics. The findings suggest that organizations adopting AI must move beyond measuring operational gains to consider the long-term human cost of deskilling and professional erosion. The proposed framework for "sociotechnical immunity" is a thoughtful step toward responsible AI deployment, but its success will depend on whether employers view worker skill preservation as a feature rather than an obstacle to cost reduction.