New NBER Research Warns of 'Knowledge Collapse' Risk From Agentic AI Systems
Key Takeaways
- Agentic AI systems, despite improving immediate decision quality, can erode long-term collective knowledge by reducing human incentives to contribute to shared general understanding
- Economies can tip into "knowledge collapse," a steady state in which general knowledge vanishes entirely, once agentic AI recommendation accuracy exceeds a critical threshold and human learning effort is sufficiently elastic
- Societal welfare is maximized at an interior level of agentic AI precision rather than at the highest possible accuracy, suggesting that regulatory limits on precision may improve outcomes
Summary
A new working paper from the National Bureau of Economic Research, authored by economists Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar, presents a dynamic economic model examining how agentic AI systems may undermine human learning incentives and degrade society's long-term knowledge base. The research argues that while agentic AI can improve near-term decision quality by providing highly accurate, context-specific recommendations, it simultaneously reduces human motivation to generate and share general knowledge—creating a "knowledge collapse" scenario where collective understanding erodes despite access to superior personalized advice.
The model demonstrates that human learning produces two complementary outputs: private context-specific knowledge and public general knowledge that accumulates across society. Agentic AI substitutes for the private learning effort while inadvertently reducing contributions to shared general knowledge. The authors warn that when agentic AI recommendations exceed a critical accuracy threshold and human effort is sufficiently elastic, the entire system can tip into a steady state of knowledge collapse where general knowledge vanishes entirely. The research reveals that welfare does not monotonically improve with AI accuracy—instead, there exists an optimal, interior level of agentic precision that maximizes societal welfare.
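The mechanism described above can be illustrated with a toy simulation. This is not the authors' model: the functional forms (a quadratic decline of learning effort in AI accuracy, linear decay of general knowledge) and all parameter values (`beta`, `delta`, the welfare weights) are illustrative assumptions chosen only to reproduce the qualitative features the paper reports — a collapse threshold and an interior welfare optimum.

```python
import numpy as np

# Toy sketch (NOT the paper's actual model): human learning effort falls
# with AI accuracy `a`; general knowledge G evolves as
# G_{t+1} = (1 - delta) * G_t + effort, so its steady state is effort / delta.

def steady_state_knowledge(a, beta=1.2, delta=0.1):
    """Steady-state general knowledge at AI accuracy a in [0, 1].
    Effort is assumed to decline quadratically in a and hits zero past
    a = sqrt(1 / beta) -- the 'knowledge collapse' threshold in this toy."""
    effort = max(0.0, 1.0 - beta * a ** 2)
    return effort / delta

def welfare(a, w_ai=1.0, w_know=0.15):
    """Welfare = immediate decision quality (rising in a) plus the value
    of society's accumulated general knowledge (falling once effort drops)."""
    return w_ai * a + w_know * steady_state_knowledge(a)

accuracies = np.linspace(0.0, 1.0, 101)
best = accuracies[int(np.argmax([welfare(a) for a in accuracies]))]
# `best` lands strictly between 0 and 1 (an interior optimum), while
# steady_state_knowledge(1.0) == 0.0: maximal accuracy collapses knowledge.
```

In this parameterization, raising `beta` (more elastic effort) lowers the collapse threshold, mirroring the paper's condition that collapse requires human effort to be sufficiently responsive to AI accuracy.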
To address these risks, the authors propose two policy approaches: information-design regulations that limit agentic AI precision to welfare-optimal levels, and investments in infrastructure for better aggregation and pooling of human-generated general knowledge, which they show unambiguously raises welfare and increases resilience to knowledge collapse.
Editorial Opinion
This research presents a sobering economic analysis of a poorly understood risk: that the optimization of individual decision-making through agentic AI could paradoxically harm society's collective knowledge base. The non-monotonic relationship between AI accuracy and welfare is particularly striking and challenges the common assumption that 'more accurate AI is always better.' If validated empirically, these findings would justify careful regulatory consideration of agentic AI deployment, particularly around information design and the preservation of human knowledge-generation incentives.


