Study Reveals Significant Perception Gap Between AI Experts and Public on Risks and Benefits
Key Takeaways
- Academic experts anticipate higher likelihoods of AI scenarios occurring, perceive lower risks, and see greater benefits than the general public across 71 scenarios
- Experts' overall evaluations are driven primarily by perceived benefits (β = 0.623), while the public's are driven primarily by perceived risks (β = 0.703)
- Divergence is greatest in high-stakes domains such as justice and political decision-making; views converge on medical and criminal-use scenarios
Summary
A new peer-reviewed study published in the journal AI & Society reveals a substantial divergence between how AI experts and the general public perceive artificial intelligence's risks, benefits, and overall value. Researchers examined the mental models of 1,110 members of the German public and 119 academic AI experts across 71 AI scenarios spanning domains including healthcare, employment, sustainability, inequality, art, and warfare.
The research found that academic experts consistently anticipated higher probabilities of AI occurrence, perceived lower risks, and reported greater benefits compared to the public. Crucially, the two groups demonstrated fundamentally different decision-making frameworks: experts' evaluations were primarily driven by perceived benefits (β = 0.623), while the public weighted perceived risks significantly more heavily (β = 0.703).
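The β values above are standardized regression weights: coefficients obtained after z-scoring both predictors and outcome, so their magnitudes can be compared directly. A minimal sketch of how such weights are computed, using synthetic data (not the study's) and hypothetical variable names:

```python
import numpy as np

# Illustrative only: synthetic stand-ins for survey responses.
# 'risk' and 'benefit' are perceived-risk/benefit ratings;
# 'evaluation' is each respondent's overall AI evaluation.
rng = np.random.default_rng(0)
n = 1000
risk = rng.normal(size=n)
benefit = rng.normal(size=n)
# Construct an outcome where risk dominates, mimicking the public's pattern.
evaluation = -0.7 * risk + 0.3 * benefit + rng.normal(scale=0.5, size=n)

def standardized_betas(X, y):
    """Z-score predictors and outcome, then fit ordinary least squares;
    the resulting coefficients are standardized beta weights."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

betas = standardized_betas(np.column_stack([risk, benefit]), evaluation)
print(dict(zip(["risk", "benefit"], betas.round(2))))
```

Because both sides are standardized, |β| reflects each predictor's relative influence, which is what lets the study compare the public's risk weight against the experts' benefit weight on a common scale.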
The study identifies specific convergence points (medical diagnoses, criminal use detection) and tension areas (justice systems, political decision-making) where expert-public views diverge most sharply. The authors argue that current research and implementation practices may inadvertently create what they call "procrustean AI"—systems insufficiently aligned with the risk-related priorities of affected publics. They advocate for more participatory approaches in AI governance and development.
In short, expert-centric research and implementation practices may create systems misaligned with public risk priorities, undermining societal acceptance of AI.
Editorial Opinion
This research exposes a critical blind spot in AI governance: expert and public stakeholders inhabit fundamentally different mental models of AI risk and benefit. The quantified weight disparity, with the public assigning a risk weight of 0.703 versus the experts' 0.361, is striking and suggests current participatory mechanisms are insufficient. If development and deployment agendas continue to be dominated by expert perspectives that undervalue public risk concerns, the result will be AI systems that are technically sound but societally misaligned. The paper makes a compelling case that genuine co-design, not mere consultation, is necessary to bridge this divide.


