TY - GEN
T1 - Fair Regression: Quantitative Definitions and Reduction-Based Algorithms
T2 - 36th International Conference on Machine Learning, ICML 2019
AU - Agarwal, Alekh
AU - Dudík, Miroslav
AU - Wu, Zhiwei Steven
PY - 2019/1/1
Y1 - 2019/1/1
AB - In this paper, we study the prediction of a real-valued target, such as a risk score or recidivism rate, while guaranteeing a quantitative notion of fairness with respect to a protected attribute such as gender or race. We call this class of problems fair regression. We propose general schemes for fair regression under two notions of fairness: (1) statistical parity, which asks that the prediction be statistically independent of the protected attribute, and (2) bounded group loss, which asks that the prediction error restricted to any protected group remain below some pre-determined level. While we only study these two notions of fairness, our schemes are applicable to arbitrary Lipschitz-continuous losses, and so they encompass least-squares regression, logistic regression, quantile regression, and many other tasks. Our schemes only require access to standard risk minimization algorithms (such as standard classification or least-squares regression) while providing theoretical guarantees on the optimality and fairness of the obtained solutions. In addition to analyzing theoretical properties of our schemes, we empirically demonstrate their ability to uncover fairness-accuracy frontiers on several standard datasets.
UR - http://www.scopus.com/inward/record.url?scp=85078571986&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85078571986&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85078571986
T3 - 36th International Conference on Machine Learning, ICML 2019
SP - 166
EP - 183
BT - 36th International Conference on Machine Learning, ICML 2019
PB - International Machine Learning Society (IMLS)
Y2 - 9 June 2019 through 15 June 2019
ER -