A method for comparing multiple imputation techniques: A case study on the U.S. national COVID cohort collaborative

Elena Casiraghi, Rachel Wong, Margaret Hall, Ben Coleman, Marco Notaro, Michael D. Evans, Jena S. Tronieri, Hannah Blau, Bryan Laraway, Tiffany J. Callahan, Lauren E. Chan, Carolyn T. Bramante, John B. Buse, Richard A. Moffitt, Til Stürmer, Steven G. Johnson, Yu Raymond Shao, Justin Reese, Peter N. Robinson, Alberto Paccanaro, Giorgio Valentini, Jared D. Huling, Kenneth J. Wilkins

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Healthcare datasets obtained from Electronic Health Records have proven to be extremely useful for assessing associations between patients’ predictors and outcomes of interest. However, these datasets often suffer from missing values in a high proportion of cases, whose removal may introduce severe bias. Several multiple imputation algorithms have been proposed to attempt to recover the missing information under an assumed missingness mechanism. Each algorithm presents strengths and weaknesses, and there is currently no consensus on which multiple imputation algorithm works best in a given scenario. Furthermore, the selection of each algorithm's parameters and data-related modeling choices are also both crucial and challenging. In this paper we propose a novel framework to numerically evaluate strategies for handling missing data in the context of statistical analysis, with a particular focus on multiple imputation techniques. We demonstrate the feasibility of our approach on a large cohort of type-2 diabetes patients provided by the National COVID Cohort Collaborative (N3C) Enclave, where we explored the influence of various patient characteristics on outcomes related to COVID-19. Our analysis included classic multiple imputation techniques as well as simple complete-case Inverse Probability Weighted models. Extensive experiments show that our approach can effectively highlight the most promising and performant missing-data handling strategy for our case study. Moreover, our methodology allowed a better understanding of the behavior of the different models and of how that behavior changed as we modified their parameters. Our method is general and can be applied to different research fields and to datasets containing heterogeneous data types.
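The multiple-imputation workflow named in the abstract (impute several times, analyze each completed dataset, then pool) can be illustrated with a minimal sketch. The toy data and the hot-deck draw below are hypothetical stand-ins for the paper's actual imputation models (e.g., MICE-style chained equations); only the pooling step, Rubin's rules, is standard:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy variable with missing entries (np.nan).
x = np.array([2.1, np.nan, 3.5, 4.0, np.nan, 2.8, 3.1, np.nan, 3.9, 2.5])
obs = x[~np.isnan(x)]
m = 20  # number of imputed datasets

estimates, variances = [], []
for _ in range(m):
    xi = x.copy()
    # Simple hot-deck draw from observed values -- a stand-in for a real
    # imputation model such as predictive mean matching or MICE.
    xi[np.isnan(xi)] = rng.choice(obs, size=int(np.isnan(xi).sum()), replace=True)
    estimates.append(xi.mean())                 # per-imputation point estimate
    variances.append(xi.var(ddof=1) / len(xi))  # its sampling variance

# Rubin's rules: pool the estimates and combine within/between variance.
q_bar = np.mean(estimates)          # pooled point estimate
u_bar = np.mean(variances)          # within-imputation variance
b = np.var(estimates, ddof=1)       # between-imputation variance
t = u_bar + (1 + 1 / m) * b         # total variance of the pooled estimate

print(f"pooled mean = {q_bar:.3f}, total variance = {t:.4f}")
```

The between-imputation term `b` is what distinguishes multiple imputation from single imputation: it propagates the uncertainty due to the missing values into the final variance estimate.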

Original language: English (US)
Article number: 104295
Journal: Journal of Biomedical Informatics
Volume: 139
DOIs
State: Published - Mar 2023

Bibliographical note

Funding Information:
Elena Casiraghi, Marco Notaro, and Giorgio Valentini were supported by Università degli Studi di Milano, Piano di sviluppo di ricerca, grant number 2015-17 PSR2015-17.

Funding Information:
Alberto Paccanaro was supported by Biotechnology and Biological Sciences Research Council (https://bbsrc.ukri.org/) grant numbers BB/K004131/1, BB/F00964X/1 and BB/M025047/1, Medical Research Council (https://mrc.ukri.org) grant number MR/T001070/1, Consejo Nacional de Ciencia y Tecnología Paraguay (https://www.conacyt.gov.py/) grant numbers 14-INV-088, PINV15-315 and PINV20-337, National Science Foundation Advances in Bio Informatics (https://www.nsf.gov/) grant number 1660648, Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro grant number E-26/201.079/2021 (260380) and Fundação Getulio Vargas.

Publisher Copyright:
© 2023

Keywords

  • COVID-19 severity assessment
  • Clinical informatics
  • Diabetic patients
  • Evaluation framework
  • Multiple Imputation
