TY - JOUR
T1 - Retiring ∆DP
T2 - New Distribution-Level Metrics for Demographic Parity
AU - Han, Xiaotian
AU - Jiang, Zhimeng
AU - Jin, Hongye
AU - Liu, Zirui
AU - Zou, Na
AU - Wang, Qifan
AU - Hu, Xia
N1 - Publisher Copyright:
© 2023, Transactions on Machine Learning Research. All rights reserved.
PY - 2023/5/1
Y1 - 2023/5/1
N2 - Demographic parity is the most widely recognized measure of group fairness in machine learning, which requires equal treatment of different demographic groups. Numerous works aim to achieve demographic parity by pursuing the commonly used metric ∆DP. Unfortunately, in this paper, we reveal that the fairness metric ∆DP cannot precisely measure the violation of demographic parity, because it inherently has the following drawbacks: i) zero-value ∆DP does not guarantee zero violation of demographic parity, and ii) ∆DP values can vary with different classification thresholds. To this end, we propose two new fairness metrics, Area Between Probability density function Curves (ABPC) and Area Between Cumulative density function Curves (ABCC), to precisely measure the violation of demographic parity at the distribution level. The new fairness metrics directly measure the difference between the distributions of prediction probabilities for different demographic groups. Thus, our proposed metrics enjoy the following properties: i) zero-value ABCC/ABPC guarantees zero violation of demographic parity, and ii) ABCC/ABPC guarantees demographic parity even when the classification thresholds are adjusted. We further re-evaluate existing fair models with our proposed fairness metrics and observe different fairness behaviors of those models under the new metrics. The code is available at https://github.com/ahxt/new_metric_for_demographic_parity.
AB - Demographic parity is the most widely recognized measure of group fairness in machine learning, which requires equal treatment of different demographic groups. Numerous works aim to achieve demographic parity by pursuing the commonly used metric ∆DP. Unfortunately, in this paper, we reveal that the fairness metric ∆DP cannot precisely measure the violation of demographic parity, because it inherently has the following drawbacks: i) zero-value ∆DP does not guarantee zero violation of demographic parity, and ii) ∆DP values can vary with different classification thresholds. To this end, we propose two new fairness metrics, Area Between Probability density function Curves (ABPC) and Area Between Cumulative density function Curves (ABCC), to precisely measure the violation of demographic parity at the distribution level. The new fairness metrics directly measure the difference between the distributions of prediction probabilities for different demographic groups. Thus, our proposed metrics enjoy the following properties: i) zero-value ABCC/ABPC guarantees zero violation of demographic parity, and ii) ABCC/ABPC guarantees demographic parity even when the classification thresholds are adjusted. We further re-evaluate existing fair models with our proposed fairness metrics and observe different fairness behaviors of those models under the new metrics. The code is available at https://github.com/ahxt/new_metric_for_demographic_parity.
UR - http://www.scopus.com/inward/record.url?scp=86000084921&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=86000084921&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:86000084921
SN - 2835-8856
VL - 2023
JO - Transactions on Machine Learning Research
JF - Transactions on Machine Learning Research
ER -
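
Note: the abstract above describes ABPC and ABCC as the area between the two demographic groups' probability density curves and cumulative distribution curves of prediction probabilities, respectively. The following is a minimal illustrative sketch of that idea, not the authors' implementation (the official code is at the repository linked in the abstract). The function names, the Gaussian-KDE density estimate, the trapezoidal integration, and the toy data are assumptions made for illustration only.

import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import gaussian_kde

def abpc(scores_g0, scores_g1, grid_size=10000):
    # Area Between Probability density function Curves on [0, 1],
    # with each group's density estimated via Gaussian KDE (an assumption here).
    grid = np.linspace(0.0, 1.0, grid_size)
    pdf0 = gaussian_kde(scores_g0)(grid)
    pdf1 = gaussian_kde(scores_g1)(grid)
    return trapezoid(np.abs(pdf0 - pdf1), grid)

def abcc(scores_g0, scores_g1, grid_size=10000):
    # Area Between Cumulative density function Curves on [0, 1],
    # using each group's empirical CDF.
    grid = np.linspace(0.0, 1.0, grid_size)
    cdf0 = np.searchsorted(np.sort(scores_g0), grid, side="right") / len(scores_g0)
    cdf1 = np.searchsorted(np.sort(scores_g1), grid, side="right") / len(scores_g1)
    return trapezoid(np.abs(cdf0 - cdf1), grid)

# Toy usage: two groups with different score distributions yield nonzero
# ABPC/ABCC; identical distributions would yield values near zero, which is
# the zero-violation property the abstract highlights.
rng = np.random.default_rng(0)
g0 = rng.beta(2, 5, size=5000)  # hypothetical predicted probabilities, group 0
g1 = rng.beta(5, 2, size=5000)  # hypothetical predicted probabilities, group 1
print(abpc(g0, g1), abcc(g0, g1))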