BotBeat

Anthropic · RESEARCH · 2026-04-21

Study Reveals Flattery and Friendliness Matter More Than Competence in Human-AI Trust

Key Takeaways

  • Warmth and friendliness have significantly greater impact on anthropomorphism and perceived personality than technical competence in LLM interactions
  • Subjective or personally meaningful conversation topics (relationships, lifestyle) increase users' sense of connection and emotional closeness to chatbots
  • Excessive friendliness without substantive competence can create "superficial agreeableness" that feels fake and may lead to user over-trust and susceptibility to manipulation

Source: Hacker News · https://www.theregister.com/2026/04/20/chatbots_win_trust_by_sounding/

Summary

A new study by Anthropic researchers examines how humans form impressions of and come to trust large language models, analyzing over 2,000 interactions between 115 participants and chatbots. The findings challenge assumptions about what drives user engagement with AI systems: warmth and friendliness significantly outweigh competence in determining whether users anthropomorphize chatbots and attribute human-like qualities to them. While competence drives perceptions of usefulness and trust in task performance, the "friendliness factor" is what makes AI systems feel human, sometimes creating an illusion of understanding that may not actually exist. The research highlights a potential risk: users may over-trust systems that present themselves as warm and personable while lacking the substantive capabilities to back up that persona, leaving those users susceptible to manipulation or deception.

  • Users unconsciously fill in missing competence and intent when chatbots present themselves as warm and understanding, meaning the AI's presentation matters more than its underlying capabilities

Editorial Opinion

This research raises important questions about the design ethics of conversational AI. While warmth and a personable tone improve the user experience, the study suggests a concerning disconnect: companies can make systems feel more human without actually making them smarter or more reliable. The risk of engineered trust, where users attribute competence and intentionality to systems designed to seem friendly rather than accurate, deserves serious attention from both AI developers and regulators concerned with consumer protection.

Natural Language Processing (NLP) · AI Agents · Ethics & Bias · AI Safety & Alignment

More from Anthropic

Anthropic · FUNDING & BUSINESS · 2026-04-21
Amazon to Invest Up to $25 Billion in Anthropic Amid $100 Billion Cloud Commitment

Anthropic · PRODUCT LAUNCH · 2026-04-21
Anthropic Launches Live Artifacts for Claude, Enabling Real-Time Creation and Sharing of AI-Generated Apps and Tools

Anthropic · PRODUCT LAUNCH · 2026-04-21
Anthropic Announces Built with Opus 4.6 Claude Code Hackathon Winners

Suggested

Research Institution / Academic (Darwin Gödel Machine) · RESEARCH · 2026-04-21
AI Wargaming and Nuclear Conflict: New Research Explores De-Escalation Challenges

Adobe (Firefly) · PRODUCT LAUNCH · 2026-04-21
Adobe Unveils Agents for Businesses Amid Threat of AI Disruption

Malus.sh · RESEARCH · 2026-04-21
Malus.sh Exposes Legal Loophole: AI Tool Clones Open Source Software to Circumvent Copyright Licenses