Health chatbots are suddenly everywhere, promising round‑the‑clock answers and personalized guidance. As big names roll out tools built for clinics and consumers alike, clinicians and privacy experts are issuing a simple warning: do not tell a chatbot everything about your health. It’s not about fearmongering; it’s about understanding what these systems are and what they are not.
The appetite for digital advice is real. A recent survey by the Annenberg Public Policy Center found that 63% of respondents consider AI‑generated health information reliable, even as trust in federal health agencies like the CDC, FDA, and NIH dropped by 5–7%. That trust gap is precisely why people are oversharing symptoms and histories with systems that cannot guarantee medical accuracy or data safety.

The Privacy Gap Most Users Miss With Health Chatbots
HIPAA does not generally protect what you type into a consumer chatbot. Unless your conversation happens through a covered provider’s system or a formal business‑associate arrangement, the information you share can fall outside traditional health‑privacy rules. Many AI services state in their policies that prompts may be stored, reviewed to improve models, or shared with service partners.
Regulators have repeatedly shown how sensitive health data can leak into ad ecosystems. The Federal Trade Commission penalized GoodRx in 2023 for sharing user health information with advertising platforms and secured a settlement from BetterHelp the same year over similar practices. These cases involved health apps—not general chatbots—but they underscore how quickly “wellness” details can become marketing fuel.
Security incidents happen, too. OpenAI disclosed a 2023 bug that briefly exposed some users’ chat titles and limited billing details. Even without a breach, stored chat logs can be subject to subpoenas, accessed by contractors, or combined with other datasets. Once a deeply personal detail—an HIV status, a psychiatric diagnosis, a genetic risk—circulates beyond your control, you cannot pull it back.
AI Can Be Confidently Wrong About Medical Care
Large language models excel at fluent summaries, not clinical judgment. They generate the most statistically plausible text based on patterns in data, which means they can sound authoritative while being wrong or incomplete. That’s a dangerous combination when the question is “Do I need urgent care?”
In a 2024 study published in Nature, ChatGPT undertriaged more than half of high‑risk emergency scenarios, nudging users toward delayed evaluations rather than emergency departments. Researchers warned of missed crises and inconsistent safety guardrails. The World Health Organization has likewise urged caution, stressing that generative AI should undergo rigorous validation before guiding clinical decisions.

Small Details Can Dramatically Change Diagnoses
Medicine turns on context: timing, medications, family history, vital signs, exposures, and rare but critical exceptions. A chatbot cannot perform a physical exam, order tests, or synthesize subtle contradictions in your history. A missed detail—like new unilateral leg swelling with chest discomfort—can transform a benign explanation into a life‑threatening one.
The temptation is to “tell it everything” in hopes of a better answer. But even exhaustive prompts are no substitute for a trained clinician. Overdisclosure raises privacy risks without guaranteeing accuracy, and the polished output can create false confidence that delays necessary care.
When Chatbots Can Help Without Oversharing
Used wisely, AI can be a springboard for learning—not a diagnostic engine. It’s well suited for general wellness education, meal ideas after a celiac diagnosis, or creating a beginner‑friendly exercise plan. It can also help you prepare for appointments by drafting questions, summarizing guidelines from credible organizations, or translating medical jargon into plain language.
The key is to keep prompts generic and avoid identifying details. You do not need your full medical history to ask, “What lifestyle factors typically worsen GERD?” or “What questions should I ask a dermatologist about a changing mole?” Let your clinician tailor those answers to your body and your risks.
How To Protect Yourself If You Do Use Them
- Stick to nonidentifiable, high‑level questions.
- Leave out names, dates of birth, addresses, policy numbers, and specific test results.
- Avoid uploading images of prescriptions, lab reports, or your face.
- Check whether the service lets you turn off chat history or opt out of training.
- If possible, use tools offered by your healthcare provider that are explicitly covered by HIPAA.
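As a rough illustration of the first three tips, you can strip the most obvious identifiers from a prompt before pasting it anywhere. The sketch below is a minimal, assumed example—a few illustrative regex patterns, not a complete de‑identification tool, and no substitute for simply leaving personal details out:

```python
import re

# Illustrative patterns only -- real de-identification needs far broader
# coverage (names, addresses, record numbers, free-text context clues).
PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(prompt: str) -> str:
    """Replace obvious identifiers with generic placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt

print(scrub("Born 04/12/1978, reach me at jane@example.com or 555-867-5309."))
# prints: Born [DATE], reach me at [EMAIL] or [PHONE].
```

Even with a filter like this, the safest prompt is one that never contained the identifying detail in the first place.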
Understand the limits. Microsoft, Google, OpenAI, and Anthropic are investing in clinician‑facing systems—and some, like Microsoft’s Copilot Health, are designed for secure workflows inside health organizations. That is not the same as a public chatbot on your phone. Until consumer tools are validated for diagnosis and triage, treat their output as a starting point, not a final answer.
The bottom line is simple: chatbots make handy tutors, not confidants. Use them to get oriented, then bring your questions—and your privacy—to a licensed professional. If something feels urgent or serious, skip the prompt and seek care.
