NYU Research Finds Users Trust AI Chatbots More When They Respond Slowly — Raising Ethical Questions
Key Takeaways
- Users perceive AI chatbot responses as higher quality when artificially delayed, interpreting the pause as 'thinking' rather than computation time
- NYU researchers recommend 'Context-Aware Latency', which matches response delays to question complexity so that moral dilemmas trigger longer waits
- The study highlights anthropomorphization of AI as a growing concern, with users attributing human-like consciousness to chatbots
- Critics argue AI researchers should combat user misconceptions about AI sentience rather than exploit them for perceived user satisfaction
Summary
A new study from NYU Tandon School of Engineering, presented at CHI'26, reveals that artificial delays in AI chatbot responses make users perceive answers as more thoughtful and trustworthy, even when the length of the delay is unrelated to the question asked. Researchers Felicia Fang-Yi Tan and Professor Oded Nov tested 240 adults with a chatbot that returned responses after 2-, 9-, or 20-second delays, and found that longer response times led to higher user satisfaction because users interpreted the delay as the AI 'thinking' or 'deliberating.'
The researchers propose implementing 'Context-Aware Latency,' in which simple questions receive quick answers while complex or moral questions trigger artificial delays. They frame this as 'positive friction' and argue that users will be happier believing the AI is considering its answers more carefully. However, the study also warns that users may place undue trust in slower systems if they equate response time with quality.
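To make the proposal concrete, here is a minimal sketch of what such a mechanism might look like. The study describes only the general idea of matching delay to question complexity; the keyword-based classifier, the specific delay tiers, and all names below are illustrative assumptions, not details from the paper.

```python
import time

# Hypothetical delay tiers. The study tested fixed 2-, 9-, and 20-second
# delays; mapping tiers to question categories like this is an assumption.
DELAY_SECONDS = {
    "simple": 0.0,    # factual lookups are answered immediately
    "complex": 4.0,   # multi-step questions get a moderate pause
    "moral": 10.0,    # moral dilemmas trigger the longest artificial wait
}

# Crude keyword heuristic standing in for a real complexity classifier.
MORAL_KEYWORDS = {"should", "ethical", "moral", "right", "wrong", "fair"}


def classify_question(question: str) -> str:
    """Bucket a question into one of the hypothetical delay tiers."""
    words = set(question.lower().split())
    if words & MORAL_KEYWORDS:
        return "moral"
    if len(question.split()) > 15:
        return "complex"
    return "simple"


def respond_with_latency(question: str, answer: str) -> str:
    """Hold back an already-computed answer to simulate deliberation."""
    time.sleep(DELAY_SECONDS[classify_question(question)])
    return answer
```

Note that in this sketch the answer is fully computed before the pause begins; the delay is pure theater, which is precisely the design choice the editorial below objects to.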
The findings underscore a growing concern in AI research: users are increasingly anthropomorphizing AI chatbots, attributing human-like thinking, consciousness, and deliberation to systems that do not actually think. Critics, including the article's author Mike Elgan, argue that rather than exploiting these misconceptions through artificial delays, researchers should educate the public that AI is a tool without consciousness and resist the temptation to reinforce false beliefs for commercial advantage.
Editorial Opinion
This research exposes a troubling dynamic in AI product design: a willingness to exploit user misconceptions for the sake of perceived satisfaction. While the study's findings about user perception are scientifically valuable, the recommendation to implement artificial delays that 'fake thinking' crosses an ethical line. Rather than building genuine trust through transparency about how AI actually works, the proposed approach doubles down on anthropomorphism, a cognitive bias that risks fostering unhealthy emotional attachment to products and undermines public understanding of AI.