Study Reveals AI Systems Make Human-Like Trust Judgments, But With Critical Differences
Key Takeaways
- AI systems demonstrate an understanding of trust fundamentals (competence, integrity, benevolence) but apply them through rigid, rule-based logic rather than holistic human intuition
- AI judgments are more extreme and consistently amplify demographic biases compared to human decision-makers, particularly in financial contexts
- Different AI models produce significantly varying judgment patterns, raising concerns about unpredictability and inconsistency in deployment
Summary
A new study by researchers at Hebrew University examining over 43,000 simulated decisions reveals that advanced AI systems, including those similar to ChatGPT and Google's Gemini, make trust-based judgments about people in ways that superficially resemble human decision-making. The research found that both humans and AI favor individuals displaying competence, integrity, and benevolence—the core ingredients of trust. However, the similarities end there: AI systems approach judgment in a rigid, rule-based manner, breaking people down into discrete components scored separately rather than forming holistic impressions like humans do.
The study identified a troubling pattern: while AI mimics the structure of human judgment, it operates with more extreme and consistently biased outcomes across demographic traits, particularly in financial scenarios. Different AI models also showed significant variations in their judgment patterns. Researchers conclude that although AI captures something real about how humans evaluate one another, the machines do not think like humans, and this gap has serious implications for real-world applications in hiring, lending, and other consequential decisions.
The structural similarity between AI and human judgments masks fundamental differences in cognitive approach, with important consequences for high-stakes applications.
Editorial Opinion
This research highlights a critical risk in the increasing deployment of AI systems for consequential decisions: the surface-level resemblance to human judgment may create false confidence in their fairness and accuracy. The finding that AI amplifies rather than moderates demographic biases, while operating through opaque rule-based systems, suggests that organizations must move beyond assuming AI is neutral simply because it mimics human reasoning. The variation between models further underscores the need for rigorous testing and transparency before deploying these systems in hiring, lending, and other high-stakes domains.