Uncertainty-aware large language models for explainable disease diagnosis

Research output: Contribution to journal › Article › peer-review

Abstract

Explainable disease diagnosis, which leverages patient information (e.g., symptoms) and computational models to generate probable diagnoses and reasoning, holds strong clinical promise. Yet when clinical notes lack sufficient evidence for a definitive diagnosis, such as when characteristic symptoms are absent, diagnostic uncertainty commonly arises, increasing the risk of misdiagnosis. Despite its importance, the explicit identification and explanation of diagnostic uncertainty remain under-explored in artificial intelligence-driven systems. To fill this gap, we introduce ConfiDx, an uncertainty-aware large language model fine-tuned with diagnostic criteria. We formalized the task of uncertainty-aware diagnosis and curated richly annotated datasets that reflect varying degrees of diagnostic ambiguity. Evaluation on real-world datasets demonstrated that ConfiDx excelled in identifying diagnostic uncertainties, achieving superior diagnostic performance, and generating trustworthy explanations for diagnoses and uncertainties. Moreover, ConfiDx-assisted experts outperformed standalone experts by 10.7% in uncertainty recognition and 26% in uncertainty explanation, underscoring its substantial potential to improve clinical decision-making.

Original language: English (US)
Article number: 690
Journal: npj Digital Medicine
Volume: 8
Issue number: 1
DOIs
State: Published - Dec 2025

Bibliographical note

Publisher Copyright:
© The Author(s) 2025.

PubMed: MeSH publication types

  • Journal Article
