Google has shut down its experimental What People Suggest feature in Search, a short-lived AI module that summarized health tips sourced from forums and social media. First reported by The Guardian, the quiet rollback ends an approach that blended symptom searches with crowdsourced anecdotes — and raised alarms among clinicians and safety researchers.
How the Experiment Summarized Health Advice
What People Suggest functioned like a digest of lived experiences. When users searched for common ailments or lab results, the system surfaced patterns and snippets from public conversations — threads on platforms such as Reddit, patient communities, and Q&A sites — then distilled them into a tidy panel inside Search.
On paper, the idea promised convenience: fewer clicks and faster context from people who had “been there.” In practice, it blurred an essential line. Advice drawn from anecdotes lacks clinical rigor, and AI summaries can strip away the caveats and specifics that make a story safe for one person but risky for another.
Google’s Rationale and the Real Tension in Search
Google says the removal is part of simplifying Search, not a response to accuracy or safety concerns. Yet the timing is hard to ignore. The company has already faced scrutiny for AI Overviews that delivered questionable medical guidance on certain queries, prompting targeted pullbacks for sensitive topics such as interpreting liver test results.
Beyond reputational risk, regulatory pressure is intensifying. The European Union’s Digital Services Act demands risk mitigation for systemic harms, including health misinformation, and consumer protection agencies in multiple countries have flagged the dangers of automated health claims. With Search commanding roughly 90% of the global market, even small error rates can scale to millions of exposures.
Why Crowd Wisdom Falls Short on Patient Safety
Health guidance is highly contextual. Two people with identical symptoms can require different care based on age, medications, pregnancy status, comorbidities, or recent procedures. Anecdotes tend to omit these variables, and AI summarization can further compress nuance, turning isolated experiences into seemingly generalized recommendations.
Medical organizations have warned about this dynamic for years. The World Health Organization has urged platforms to manage “infodemics,” noting that misleading health content spreads faster than corrections. Peer-reviewed evaluations of AI chatbots and large language models in clinical scenarios, including analyses published in journals like JAMA and BMJ, show uneven performance — sometimes helpful, sometimes hazardous — especially without strict guardrails and expert review.
Even when technically accurate, advice can be incomplete. For example, a crowdsourced tip might suggest a home remedy that is harmless for most but contraindicated for people on blood thinners. Summarized without warnings, that tip becomes a risk vector rather than a shortcut.
What Users Will See Now in Health-Related Search Results
With What People Suggest gone, symptom and condition searches will lean more heavily on conventional results and AI modules trained to prioritize authoritative sources. Expect more emphasis on established medical organizations, peer-reviewed references, and content that satisfies Google’s own trust frameworks for sensitive topics — the criteria known as E-E-A-T: experience, expertise, authoritativeness, and trustworthiness.
Personal narratives are not disappearing from the web; they are simply less likely to be summarized directly by AI in Search. Users looking for lived experience can still find forums and support groups, but they will have to click through, read in context, and, ideally, cross-check with clinicians.
Implications for AI in Search and Health Information
Retreating from crowdsourced medical summaries does not signal an AI pullback across the board. Instead, it points to a more disciplined playbook: limit AI generation on high-stakes queries, invest in retrieval from vetted sources, and constrain outputs with stronger safeguards. Inside Google’s health research, models like Med-PaLM have been piloted in controlled settings with clinician oversight — a stark contrast to summarizing internet anecdotes for general searchers.
The core lesson is straightforward. Convenience cannot outrun clinical reliability when the subject is health. For search providers, that means building systems that resist the lure of viral advice and, when in doubt, elevate expert guidance over crowd consensus. For users, it reinforces a perennial rule: treat internet health tips as starting points, not prescriptions.