BotBeat

OpenAI
INDUSTRY REPORT · 2026-05-14

One in Seven UK Adults Prefer AI Chatbots to Doctor Visits; Study Reveals Safety Risks

Key Takeaways

  • 15% of UK adults use AI chatbots for health advice; healthcare professionals call this "highly concerning"
  • Long NHS waiting lists are a primary driver: 25% of chatbot users cite wait times as their reason for seeking AI alternatives
  • Medical experts warn that AI lacks the diagnostic capability to examine patients, understand a full medical history, or make safe clinical judgments
Source: Hacker News (https://www.theguardian.com/society/2026/may/13/one-in-seven-prefer-ai-chatbots-to-seeing-doctor-uk-study)

Summary

A King's College London study of over 2,000 people has found that 15% of UK adults use AI chatbots for health advice instead of consulting their GP, with a quarter citing long NHS waiting lists as their primary reason. The research, the first to quantify this behavior, reveals significant safety concerns: one in five chatbot users said the technology discouraged them from seeking professional care, and a similar proportion decided against a medical consultation based on AI advice. Experts warn that AI chatbots lack critical clinical capabilities, including the ability to examine patients, access a complete medical history, and detect subtle symptoms. The study also notes that some AI tools, including Google AI Overviews, contain false and misleading health information. Medical leaders have called for regulatory oversight, greater transparency, and investment in NHS services so that patients do not feel forced to rely on unregulated AI for healthcare decisions.

  • One in five chatbot users reported being discouraged from seeking professional medical advice by AI responses
  • Researchers call for urgent regulation, transparency, and greater investment in NHS services to restore patient trust in professional healthcare

Editorial Opinion

This study exposes a critical tension in modern healthcare: the rapid adoption of AI chatbots is outpacing regulatory frameworks and clinical validation, creating an 'unregulated AI healthcare system' alongside the NHS. While AI can provide quick answers and improve health literacy, it poses genuine risks when patients substitute it for professional medical judgment. The findings underscore that technology can only complement—not replace—clinical expertise, and that underfunded healthcare systems inadvertently drive vulnerable populations toward unproven alternatives.

Generative AI · Healthcare · Regulation & Policy · AI Safety & Alignment

