
Doctor Using AI Outlines Healthcare Promise And Perils

By Pam Belluck
Science & Health
Last updated: March 10, 2026 2:01 am

AI now sits in the waiting room with us. Patients bring chatbot printouts to appointments, clinicians lean on algorithms to tame paperwork, and trust in the system keeps sliding. In a recent Annenberg Public Policy Center survey, confidence in federal health agencies such as the CDC, FDA, and NIH fell by 5–7%, even as 63% of respondents said AI-generated health information seems reliable. That tension is exactly what a practicing family physician, Dr. Alexa Mieses Malchuk, wrestles with every day.

Her verdict in brief: AI is a powerful assistant, not a diagnostician. Used well, it smooths the path to better care. Used poorly, it can lull people into false certainty and delayed treatment. Here is the good, the bad, and the ugly — from a doctor who actually uses these tools.

Table of Contents
  • The Good: What AI Gets Right In Clinics And Homes
  • The Bad: Why Consumer Chatbots Mislead Patients
  • The Ugly: Safety Bias And Accountability Gaps
  • How To Use AI Wisely: A Doctor’s Playbook
[Image: A doctor in a futuristic medical setting examines a patient on a high-tech bed, with holographic displays of medical data and a robotic assistant nearby.]

The Good: What AI Gets Right In Clinics And Homes

On the care-team side, generative AI helps with the grind. Dr. Mieses Malchuk uses it to triage routine portal messages, draft anticipatory guidance, and structure visit notes — the kinds of tasks that steal time from patients. Research in Annals of Internal Medicine has found physicians spend nearly two hours on electronic records and desk work for every hour of direct patient care, so even small automation wins matter.

Big tech is leaning in. Google, OpenAI, and Anthropic are training health-oriented models for professional use, while Amazon and Google recently unveiled tools aimed at scheduling, clinical documentation, and medical coding. Wearable makers are experimenting too; Oura introduced an early women’s health model built on clinical research, and industry chatter suggests Apple is exploring its own health-focused AI features.

For consumers, AI shines at wellness coaching. Ask for a celiac-friendly meal plan, a progressive strength routine, or tips to make CPAP therapy more tolerable, and it can produce organized, personalized suggestions in seconds. As a “conversation starter,” it can also help patients prepare for visits by summarizing symptoms and listing questions to ask — a habit doctors often wish more people had.

The Bad: Why Consumer Chatbots Mislead Patients

Good answers require good inputs — and most people aren’t trained to supply a clinically relevant history. Dr. Mieses Malchuk sees patients arrive with polished chatbot explanations that miss crucial context, like medication doses, timing of symptoms, or family risk. The model sounds sure of itself, and that confidence can be contagious.

[Image: A doctor in a white coat talks with a patient sitting on an examination table.]

Safety data back up her caution. A study in Nature evaluating AI triage found that ChatGPT undertriaged more than 50% of high-acuity scenarios, sometimes advising 24–48-hour follow-up instead of an immediate emergency department visit. The authors flagged inconsistent crisis safeguards and urged prospective validation before broad consumer deployment.

Even when the direction is broadly correct, nuance gets lost. Two problems with similar symptoms — say, indigestion and cardiac ischemia — can look identical to a model that never examined the patient, took vital signs, or reviewed an EKG. That’s why doctors bristle at definitive chatbot language: medicine rarely offers 100% certainty, and premature certainty can be dangerous.

The Ugly: Safety Bias And Accountability Gaps

AI systems learn from historical data, and historical data reflect historical bias. If certain groups were underdiagnosed or undertreated in the past, models can inadvertently echo those patterns. The World Health Organization has warned that bias, security vulnerabilities, and opaque training data can entrench inequities if guardrails are weak.

Privacy is another sore spot. HIPAA protects health information held by covered entities, but many consumer apps and chatbots sit outside that umbrella. Sharing detailed symptoms, images, or identifiers with a general-purpose tool may expose sensitive data in ways patients don’t anticipate. Meanwhile, liability is murky: if a chatbot downplays red flags and harm follows, who is responsible — the developer, the clinic that embedded the tool, or the user who followed advice?

Regulators are trying to catch up. The FDA has cleared numerous AI-enabled tools, especially in imaging, under well-established medical device pathways, and has outlined a risk-based approach for software that influences clinical decisions. But consumer-facing chatbots that offer health guidance without being marketed as medical devices still live in a gray zone.

How To Use AI Wisely: A Doctor’s Playbook

  • Use AI as a springboard, not a replacement. Let it help you organize thoughts, draft wellness plans, and assemble questions for your clinician. Treat outputs as hypotheses to discuss, not conclusions to act on.
  • Bring the summary to your appointment. Doctors can quickly spot missing context, correct inaccuracies, and convert a rough draft into a safe, personalized plan. In Dr. Mieses Malchuk’s clinic, that collaboration saves time and improves shared decision-making.
  • Be cautious with urgent or uncertain symptoms. If something feels serious or rapidly worsening, seek in-person care. No chatbot can examine you, run tests, or assume legal responsibility for missed emergencies.
  • Remember that the trust gap cuts both ways. As public confidence in institutions dips and AI’s allure grows, clinicians who use these tools transparently — and explain their limits — can rebuild credibility. That may be AI’s most underrated role in healthcare today: not as the final word, but as a better way to start the conversation.
By Pam Belluck
Pam Belluck is a seasoned health and science journalist whose work explores the impact of medicine, policy, and innovation on individuals and society. She has reported extensively on topics like reproductive health, long-term illness, brain science, and public health, with a focus on both complex medical developments and human-centered narratives. Her writing bridges investigative depth with accessible storytelling, often covering issues at the intersection of science, ethics, and personal experience. Pam continues to examine the evolving challenges in health and medicine across global and local contexts.
FindArticles © 2025. All Rights Reserved.