Google Quietly Scraps 'What People Suggest' AI Feature Amid Health Misinformation Concerns
Key Takeaways
- Google has scrapped "What People Suggest," a feature that aggregated crowdsourced health advice from non-expert users, citing a "broader simplification" of search rather than safety concerns
- The feature's removal reflects growing pressure on Google over its health AI initiatives, particularly following a Guardian investigation exposing false medical information in AI Overviews
- Google's AI Overviews reach 2 billion monthly users and have been criticized for providing misleading health guidance, prompting partial rollbacks but continued deployment on many medical queries
Summary
Google has discontinued its "What People Suggest" AI search feature, which crowdsourced health advice from non-expert users worldwide. The company had launched the feature in March 2024 as part of its expanded medical AI capabilities, positioning it as a way to help users find insights from people with similar lived experiences of a medical condition. The feature has now been quietly removed as part of what Google described as a "broader simplification" of its search interface.
The discontinuation comes amid intensifying scrutiny over Google's use of AI to provide health information to billions of users. In January 2025, The Guardian revealed that Google's AI Overviews—AI-generated summaries shown to 2 billion people monthly—were surfacing false and misleading health information that could put users at risk. Following the investigation, Google removed AI Overviews for some medical queries but maintained the feature for others, drawing criticism from independent experts concerned about the quality and safety of AI-generated medical guidance.
Editorial Opinion
Google's decision to quietly retire "What People Suggest" reveals the company's struggle to balance AI innovation with genuine public health safety. Although it frames the removal as a routine simplification rather than a response to safety concerns, Google appears to be conceding, at least implicitly, the risks of amplifying non-expert medical advice at scale. This incident underscores a troubling pattern: Google launches health AI features with significant reach, faces public backlash over misinformation, then retreats without transparent accountability. Users deserve clear communication about the limitations and risks of AI-generated health information, not quiet feature removals.


