A taxonomy for advancing systematic error analysis in multi-site electronic health record-based clinical concept extraction

Sunyang Fu, Liwei Wang, Huan He, Andrew Wen, Nansu Zong, Anamika Kumari, Feifan Liu, Sicheng Zhou, Rui Zhang, Chenyu Li, Yanshan Wang, Jennifer St Sauver, Hongfang Liu, Sunghwan Sohn

Research output: Contribution to journal › Article › peer-review


Abstract

Background: Error analysis plays a crucial role in clinical concept extraction, a fundamental subtask within clinical natural language processing (NLP). The process typically involves a manual review of errors, examining the contextual and linguistic factors contributing to their occurrence, and the identification of underlying causes in order to refine the NLP model and improve its performance. Conducting error analysis can be complex, requiring a combination of NLP expertise and domain-specific knowledge. Because electronic health record (EHR) settings are highly heterogeneous across institutions, standardizing and reproducing the error analysis process poses additional challenges.

Objectives: This study aims to facilitate a collaborative effort to establish common definitions and taxonomies for capturing diverse error types, fostering community consensus on error analysis for clinical concept extraction tasks.

Materials and Methods: We iteratively developed and evaluated an error taxonomy based on existing literature, standards, real-world data, multisite case evaluations, and community feedback. The finalized taxonomy was released in both .dtd and .owl formats at the Open Health Natural Language Processing Consortium. The taxonomy is compatible with several open-source annotation tools, including MAE, Brat, and MedTator.

Results: The resulting error taxonomy comprises 43 distinct error classes, organized into 6 error dimensions and 4 properties, including model type (symbolic and statistical machine learning), evaluation subject (model and human), evaluation level (patient, document, sentence, and concept), and annotation examples. Internal and external evaluations revealed strong variations in error types across methodological approaches, tasks, and EHR settings. Key points emerged from community feedback, including the need to enhance the clarity, generalizability, and usability of the taxonomy, along with dissemination strategies.

Conclusion: The proposed taxonomy can accelerate and standardize the error analysis process in multi-site settings, thus improving the provenance, interpretability, and portability of NLP models. Future researchers could explore developing automated or semi-automated methods to assist in the classification and standardization of error analysis.

Original language: English (US)
Pages (from-to): 1493-1502
Number of pages: 10
Journal: Journal of the American Medical Informatics Association
Volume: 31
Issue number: 7
DOIs
State: Published - Jul 1 2024

Bibliographical note

Publisher Copyright:
© 2024 The Author(s). Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.

Keywords

  • electronic health record
  • error analysis
  • natural language processing

PubMed: MeSH publication types

  • Journal Article

