One in Seven UK Adults Prefers AI Chatbots to Doctor Visits; Study Reveals Safety Risks
Key Takeaways
- 15% of UK adults use AI chatbots for health advice; healthcare professionals call this 'highly concerning'
- Long NHS waiting lists are a primary driver: 25% of chatbot users cite wait times as their reason for seeking AI alternatives
- Medical experts warn AI lacks the ability to examine patients, review a full medical history, or make safe clinical judgments
Summary
A King's College London study of over 2,000 people has found that 15% of UK adults are using AI chatbots for health advice instead of consulting their GP, with a quarter citing long NHS waiting lists as their primary reason. The research, the first to quantify this behavior, reveals significant safety concerns: one in five chatbot users said the technology discouraged them from seeking professional care, while a similar proportion decided against medical consultations based on AI advice. Experts warn that AI chatbots lack critical clinical capabilities such as patient examination, complete medical history access, and the ability to detect subtle symptoms. The study also notes that some AI tools, including Google AI Overviews, contain false and misleading health information. Medical leaders have called for regulatory oversight, greater transparency, and investment in NHS services to ensure patients don't feel forced to rely on unregulated AI for healthcare decisions.
- Researchers call for urgent regulation, transparency, and greater investment in NHS services to restore patient trust in professional healthcare
Editorial Opinion
This study exposes a critical tension in modern healthcare: the rapid adoption of AI chatbots is outpacing regulatory frameworks and clinical validation, creating an 'unregulated AI healthcare system' alongside the NHS. While AI can provide quick answers and improve health literacy, it poses genuine risks when patients substitute it for professional medical judgment. The findings underscore that technology can only complement—not replace—clinical expertise, and that underfunded healthcare systems inadvertently drive vulnerable populations toward unproven alternatives.