OpenEvidence, a rapidly growing clinical AI assistant best known to clinicians as “ChatGPT for medicine,” has raised $200 million at a $6 billion valuation, Reshma Sohoni of Seedcamp confirmed to me by email this morning, after TechCrunch and then The New York Times broke the news. The step-up comes just three months after a $210 million round at a $3.5 billion valuation, signaling investor belief that vertical AI tools built for healthcare are moving from pilot programs toward standard of care.
What the OpenEvidence clinical platform does
OpenEvidence is designed to answer point-of-care queries with references to peer-reviewed articles rather than the open web. The company has trained and tuned its system on top medical journals such as JAMA and the New England Journal of Medicine, aiming to condense literature review into minutes for time-pressed clinicians. Verified medical professionals can use the tool for free; the product is ad-supported.
Clinicians say the value is in speed and sourcing: instead of digging through PubMed or paywalled point solutions, the assistant returns concise answers linked to the underlying studies.
Monthly clinical consultations on OpenEvidence have nearly doubled since July to about 15 million, according to The Times, a sign of meaningful traction in hospital wards and clinics where quick access to evidence can change decisions about care.
A big funding round with blue-chip venture backers
The round was led by Google Ventures, with follow-on investment from Sequoia Capital, Kleiner Perkins, Blackstone, Thrive Capital, Coatue Management, Bond, and Craft. That roster may read like a list of the usual suspects, but these are the firms with the strongest record of backing category leaders in enterprise software and life sciences and re-backing them round after round. They don’t just back winners; they often make them. Their presence here is a signal that clinical AI is being treated as core infrastructure for how care gets delivered, not a novelty.
The jump in valuation over such a short window tracks a broader trend: general-purpose models alone are no longer the story, and domain-specific systems, trained on vetted corpora, instrumented for safety, and architected to show their work, are gaining ground. In healthcare, where accuracy and provenance fundamentally matter, that is the difference between a chatty general-purpose assistant and a tool clinicians should actually want everywhere.
Academic medical centers and specialist practices are also experimenting with AI to manage the cognitive load of keeping up with a rapidly expanding body of evidence. Traditional references such as UpToDate and DynaMed remain the backbone of most clinical workflows, but their reliance on manual navigation increasingly looks dated next to conversational search.
Retrieval-augmented generation systems such as OpenEvidence combine traditional evidence summarization with retrieval models (BERT-style encoders) that pull relevant passages from the primary literature, generate a draft answer, and attach citations to the sources used. The intuition is straightforward but powerful: in hospital medicine or hematology-oncology, guidelines shift constantly, and a single new randomized controlled trial can upend the standard of care.
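For readers who want the mechanics, here is a minimal, hypothetical sketch of that retrieve-draft-cite loop in Python. It is not OpenEvidence’s code; the corpus, the lexical scorer, and the `retrieve`/`answer` helpers are illustrative stand-ins for a real embedding index and a generative model.

```python
# Hypothetical retrieval-augmented generation sketch (not OpenEvidence's implementation).
# Pipeline: score the query against a small corpus of journal abstracts, keep the top
# passages, draft an answer from them, and attach a citation to every line.

from dataclasses import dataclass


@dataclass
class Passage:
    citation: str   # e.g. a journal reference
    text: str


# Toy corpus standing in for an index of peer-reviewed abstracts.
CORPUS = [
    Passage("NEJM 2023;388:1-10", "Drug A reduced 30-day mortality versus placebo in sepsis."),
    Passage("JAMA 2024;331:200-9", "Drug A showed no benefit for renal outcomes at 90 days."),
    Passage("Lancet 2022;400:50-9", "Early mobilization shortened ICU length of stay."),
]


def score(query: str, passage: Passage) -> float:
    """Crude lexical-overlap score; production systems use dense (BERT-style) encoders."""
    q = set(query.lower().split())
    p = set(passage.text.lower().split())
    return len(q & p) / max(len(q), 1)


def retrieve(query: str, k: int = 2) -> list[Passage]:
    """Return the k passages most relevant to the query."""
    return sorted(CORPUS, key=lambda p: score(query, p), reverse=True)[:k]


def answer(query: str) -> str:
    """Draft an answer grounded only in retrieved passages, with inline citations.
    A real system would call a generative model here; this sketch just lists the
    supporting passages so every line stays traceable to its source."""
    hits = retrieve(query)
    lines = [f"- {p.text} [{p.citation}]" for p in hits]
    return f"Q: {query}\n" + "\n".join(lines)


if __name__ == "__main__":
    print(answer("Does drug A reduce mortality in sepsis?"))
```

The design point the sketch illustrates is that generation is constrained to what retrieval returned, which is what makes the citations meaningful rather than decorative.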
Most importantly, clinicians want systems that don’t hallucinate half of their answers, that communicate their confidence, and whose claims can be verified in seconds. Trade organizations like the American Medical Association have made it clear that any AI used in the delivery of care must be transparent, support professional judgment, and safeguard patient data. Systems that ground their outputs in peer-reviewed sources meet those requirements far more effectively than those that simply generate unsourced text.
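One way to make “verifiable in seconds” concrete is a citation-coverage check on the drafted answer. The sketch below is a standalone, assumed guardrail, not a documented OpenEvidence feature: it flags any sentence that does not cite a source from the retrieved set.

```python
# Hypothetical guardrail: verify that every sentence in a drafted answer carries at
# least one citation drawn from the retrieved sources, so provenance can be checked quickly.

import re


def citation_coverage(draft: str, allowed_citations: set[str]) -> tuple[float, list[str]]:
    """Return the share of sentences with a known citation, plus the uncited offenders."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", draft) if s.strip()]
    uncited = []
    for sentence in sentences:
        tags = re.findall(r"\[([^\]]+)\]", sentence)
        if not any(tag in allowed_citations for tag in tags):
            uncited.append(sentence)
    covered = 1 - len(uncited) / max(len(sentences), 1)
    return covered, uncited


# Example: a draft mixing one grounded claim with one unsupported claim.
allowed = {"NEJM 2023;388:1-10"}
draft = ("Drug A reduced 30-day mortality in sepsis [NEJM 2023;388:1-10]. "
         "It is also first-line therapy for heart failure.")
coverage, flagged = citation_coverage(draft, allowed)
print(f"coverage={coverage:.0%}, flagged={flagged}")
```

A check like this doesn’t prove an answer is correct, but it makes unsupported claims easy to surface before a clinician ever sees them.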
Regulatory scrutiny and growing evidence demands
As capabilities expand from literature summarization to suggestions that could influence diagnosis or treatment, regulatory scrutiny will intensify. The Food and Drug Administration has issued guidance around clinical decision support and adaptive algorithms, and federal health agencies have flagged the need for post-market monitoring and bias audits.
For OpenEvidence, continued growth will hinge on rigorous evaluation: external benchmarking by academic centers, prospective studies that measure clinical impact, and clear governance around advertising and conflicts of interest. There’s also an adoption reality check. Health systems are already piloting AI scribes from companies like Nuance and Abridge, and research models such as Med-PaLM have demonstrated strong performance on medical exams.
The bar is no longer novelty—it’s measurable improvements in quality, safety, and cost. Proof points could include:
- Reduced time to guideline-concordant care
- Fewer unnecessary tests
- Improved adherence to antimicrobial stewardship protocols
Business model questions and advertising safeguards
OpenEvidence’s ad-supported, free access for verified clinicians is unusual in clinical software, where subscription and enterprise licensing dominate. It could accelerate top-of-funnel growth but requires careful guardrails to ensure independence from commercial influence.
Expect buyers—hospital committees, compliance teams, and medical directors—to scrutinize:
- How ads are separated from clinical content
- How sources are selected
- How the company reports model updates
What to watch next for OpenEvidence and clinicians
Key signals include:
- Peer-reviewed assessments of accuracy
- Academic medical center partnerships
- Integration with electronic health records
- Policies around transparency, ads, and data retention
If OpenEvidence can translate that user growth into proven clinical outcomes and credible governance, its new capital and storied cap table put it well on its way to becoming a default reference layer for evidence-based medicine.