AI now sits in the waiting room with us. Patients bring chatbot printouts to appointments, clinicians lean on algorithms to tame paperwork, and trust in the system keeps sliding. In a recent Annenberg Public Policy Center survey, confidence in federal health agencies such as the CDC, FDA, and NIH fell by 5–7%, even as 63% of respondents said AI-generated health information seems reliable. That tension is exactly what a practicing family physician, Dr. Alexa Mieses Malchuk, wrestles with every day.
Her verdict in brief: AI is a powerful assistant, not a diagnostician. Used well, it smooths the path to better care. Used poorly, it can lull people into false certainty and delayed treatment. Here is the good, the bad, and the ugly — from a doctor who actually uses these tools.
The Good: What AI Gets Right In Clinics And Homes
On the care-team side, generative AI helps with the grind. Dr. Mieses Malchuk uses it to triage routine portal messages, draft anticipatory guidance, and structure visit notes — the kinds of tasks that steal time from patients. Research in Annals of Internal Medicine has found physicians spend nearly two hours on electronic records and desk work for every hour of direct patient care, so even small automation wins matter.
Big tech is leaning in. Google, OpenAI, and Anthropic are training health-oriented models for professional use, while Amazon and Google recently unveiled tools aimed at scheduling, clinical documentation, and medical coding. Wearable makers are experimenting too; Oura introduced an early women’s health model built on clinical research, and industry chatter suggests Apple is exploring its own health-focused AI features.
For consumers, AI shines at wellness coaching. Ask for a celiac-friendly meal plan, a progressive strength routine, or tips to make CPAP therapy more tolerable, and it can produce organized, personalized suggestions in seconds. As a “conversation starter,” it can also help patients prepare for visits by summarizing symptoms and listing questions to ask — a habit doctors often wish more people had.
The Bad: Why Consumer Chatbots Mislead Patients
Good answers require good inputs — and most people aren’t trained to supply a clinically relevant history. Dr. Mieses Malchuk sees patients arrive with polished chatbot explanations that miss crucial context, like medication doses, timing of symptoms, or family risk. The model sounds sure of itself, and that confidence can be contagious.
Safety data back up her caution. A study in Nature evaluating AI triage found that ChatGPT undertriaged more than 50% of high-acuity scenarios, sometimes advising 24–48-hour follow-up instead of an immediate emergency department visit. The authors flagged inconsistent crisis safeguards and urged prospective validation before broad consumer deployment.
Even when the direction is broadly correct, nuance gets lost. Two problems with similar symptoms — say, indigestion and cardiac ischemia — can look identical to a model that never examined the patient, took vital signs, or reviewed an EKG. That’s why doctors bristle at definitive chatbot language: medicine rarely offers 100% certainty, and premature certainty can be dangerous.
The Ugly: Safety, Bias, And Accountability Gaps
AI systems learn from historical data, and historical data reflect historical bias. If certain groups were underdiagnosed or undertreated in the past, models can inadvertently echo those patterns. The World Health Organization has warned that bias, security vulnerabilities, and opaque training data can entrench inequities if guardrails are weak.
Privacy is another sore spot. HIPAA protects health information held by covered entities, but many consumer apps and chatbots sit outside that umbrella. Sharing detailed symptoms, images, or identifiers with a general-purpose tool may expose sensitive data in ways patients don’t anticipate. Meanwhile, liability is murky: if a chatbot downplays red flags and harm follows, who is responsible — the developer, the clinic that embedded the tool, or the user who followed advice?
Regulators are trying to catch up. The FDA has cleared numerous AI-enabled tools, especially in imaging, under well-established medical device pathways, and has outlined a risk-based approach for software that influences clinical decisions. But consumer-facing chatbots that offer health guidance without being marketed as medical devices still live in a gray zone.
How To Use AI Wisely: A Doctor’s Playbook
- Use AI as a springboard, not a replacement. Let it help you organize thoughts, draft wellness plans, and assemble questions for your clinician. Treat outputs as hypotheses to discuss, not conclusions to act on.
- Bring the summary to your appointment. Doctors can quickly spot missing context, correct inaccuracies, and convert a rough draft into a safe, personalized plan. In Dr. Mieses Malchuk’s clinic, that collaboration saves time and improves shared decision-making.
- Be cautious with urgent or uncertain symptoms. If something feels serious or rapidly worsening, seek in-person care. No chatbot can examine you, run tests, or assume legal responsibility for missed emergencies.
- And remember that the trust gap cuts both ways. As public confidence in institutions dips and AI’s allure grows, clinicians who use these tools transparently — and explain their limits — can rebuild credibility. That may be AI’s most underrated role in healthcare today: not as the final word, but as a better way to start the conversation.