Abstract
Along with the increasing availability of health data has come the rise of data-driven models to inform decision making and policy. These models have the potential to benefit both patients and health care providers but can also exacerbate health inequities. Existing "algorithmic fairness" methods for measuring and correcting model bias fall short of what is needed for health policy in two key ways. First, methods typically focus on a single grouping along which discrimination may occur rather than considering multiple, intersecting groups. Second, in clinical applications, risk prediction is typically used to guide treatment, creating distinct statistical issues that invalidate most existing techniques. We present novel unfairness metrics that address both challenges. We also develop a complete framework of estimation and inference tools for our metrics, including the unfairness value ("u-value"), used to determine the relative extremity of unfairness, and standard errors and confidence intervals employing an alternative to the standard bootstrap. We demonstrate the application of our framework to a COVID-19 risk prediction model deployed in a major Midwestern health system.
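As a purely illustrative sketch, not the paper's u-values or unfairness metrics (which the abstract does not define), the snippet below shows the general idea of auditing a risk model across intersecting groups, e.g., race × sex, rather than along a single grouping at a time. The column names, the toy data, and the disparity measure (each intersectional group's mean calibration error relative to the overall population) are all assumptions made for illustration; in particular, this naive comparison ignores the statistical issues that arise when predictions guide treatment, which the paper's methods are designed to handle.

```python
# Illustrative sketch only -- NOT the paper's metric. Audits a risk model's
# average calibration error within every intersection of the given groupings.
import numpy as np
import pandas as pd

def intersectional_gaps(df, score_col, outcome_col, group_cols):
    """Mean (predicted risk - observed outcome) per intersectional group,
    reported as a signed gap relative to the overall population."""
    overall = (df[score_col] - df[outcome_col]).mean()
    per_group = (
        df.assign(err=df[score_col] - df[outcome_col])
          .groupby(group_cols)["err"].mean()
    )
    return per_group - overall

# Toy data standing in for a deployed model's predictions (hypothetical).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "race": rng.choice(["A", "B"], n),
    "sex": rng.choice(["F", "M"], n),
    "score": rng.uniform(0, 1, n),
})
df["outcome"] = rng.binomial(1, df["score"])  # outcomes drawn at the predicted risk

# One row per race-by-sex intersection, rather than per race or per sex alone.
print(intersectional_gaps(df, "score", "outcome", ["race", "sex"]))
```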
Original language | English (US) |
---|---|
Pages (from-to) | 702-717 |
Number of pages | 16 |
Journal | Biostatistics |
Volume | 25 |
Issue number | 3 |
State | Published - Jul 1 2024 |
Bibliographical note
Publisher Copyright: © 2023 The Author(s). Published by Oxford University Press. All rights reserved.
Keywords
- Algorithmic fairness
- Causal inference
- COVID-19
- Intersectionality
- Risk prediction
PubMed: MeSH publication types
- Journal Article