BotBeat

OpenAI
RESEARCH · 2026-04-27

NYU Research Finds Users Trust AI Chatbots More When They Respond Slowly — Raising Ethical Questions

Key Takeaways

  • Users perceive AI chatbot responses as higher quality when they are artificially delayed, interpreting the pause as 'thinking' rather than computation time
  • NYU researchers recommend 'Context-Aware Latency,' which matches response delays to question complexity, so that moral dilemmas trigger longer waits
  • The study highlights anthropomorphization of AI as a growing concern, with users attributing human-like consciousness to chatbots
Source: Hacker News (https://www.machinesociety.ai/p/ai-researchers-want-ai-to-fake-thinking-247)

Summary

A new study from the NYU Tandon School of Engineering, presented at CHI'26, reveals that artificial delays in AI chatbot responses make users perceive answers as more thoughtful and trustworthy, even when the delay is unrelated to the question. Researchers Felicia Fang-Yi Tan and Professor Oded Nov tested 240 adults with a chatbot whose responses were delivered after 2-, 9-, or 20-second delays, and found that longer response times produced higher user satisfaction because users interpreted the delay as the AI 'thinking' or 'deliberating.'

The researchers propose implementing 'Context-Aware Latency,' where simple questions receive quick answers while complex or moral questions trigger artificial delays. They frame this as 'positive friction' and argue that users will be happier if they believe the AI is weighing its answers more carefully. However, the study also warns that users may place undue trust in slower systems if they equate response time with quality.
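The mechanism the paper proposes could be sketched roughly as follows. Everything here is illustrative: the delay values loosely echo the study's 2/9/20-second conditions, and the keyword-based classifier is a stand-in assumption, not the researchers' actual implementation.

```python
import time

# Illustrative delay tiers (seconds), loosely echoing the study's conditions.
DELAY_SECONDS = {"simple": 0.0, "complex": 2.0, "moral": 5.0}

# Crude stand-in for a real complexity/morality classifier.
MORAL_KEYWORDS = {"should", "ethical", "moral", "right", "wrong", "fair"}

def classify(question: str) -> str:
    """Bucket a question into a latency tier via a simple keyword heuristic."""
    words = set(question.lower().split())
    if words & MORAL_KEYWORDS:
        return "moral"
    if len(question.split()) > 12:
        return "complex"
    return "simple"

def respond_with_latency(question: str, answer: str) -> str:
    """Return the precomputed answer only after the tier's artificial pause."""
    tier = classify(question)
    time.sleep(DELAY_SECONDS[tier])  # the 'fake thinking' delay under critique
    return answer
```

Note that the answer is ready before the pause begins; the delay exists purely to shape user perception, which is exactly the design choice the article's critics object to.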

The findings underscore a growing concern in AI research: users are increasingly anthropomorphizing AI chatbots, attributing human-like thinking, consciousness, and deliberation to systems that do not actually think. Critics, including the article's author Mike Elgan, argue that rather than exploiting these misconceptions through artificial delays, researchers should be educating the public about what AI actually is, a tool without consciousness, and resisting the temptation to reinforce false beliefs for commercial advantage.


Editorial Opinion

This research exposes a troubling dynamic in AI product design: a willingness to exploit user misconceptions in exchange for perceived satisfaction. While the study's findings about user perception are scientifically valuable, the recommendation to implement artificial delays to 'fake thinking' crosses an ethical line. Rather than building genuine trust through transparency about how AI actually works, the proposed approach doubles down on anthropomorphism, a cognitive bias that risks fostering unhealthy emotional attachment to products and undermines public understanding of AI.

Large Language Models (LLMs) · Generative AI · Ethics & Bias · AI Safety & Alignment

More from OpenAI

OpenAI
PARTNERSHIP

Microsoft and OpenAI's Famed AGI Agreement Is Dead

2026-04-27
OpenAI
PRODUCT LAUNCH

OpenAI Developing 'AI Agent Phone' in Partnership with Qualcomm and MediaTek

2026-04-27
OpenAI
PARTNERSHIP

Microsoft and OpenAI End Exclusivity Agreement, Opening Path for Multi-Cloud Future

2026-04-27

Suggested

DeepSeek
UPDATE

DeepSeek Slashes AI Model Pricing by 97%, Intensifying Price War with OpenAI

2026-04-27
Anthropic
UPDATE

Anthropic Restricts Opus Model Access on Claude Pro Behind Extra Usage Paywall

2026-04-27
Intel
FUNDING & BUSINESS

Former DeepMind Researcher David Silver Raises $1.1B for Ineffable Intelligence, AI Lab Building 'Superlearner' Without Human Data

2026-04-27