TY - GEN
T1 - An empirical study of rich subgroup fairness for machine learning
AU - Kearns, Michael
AU - Neel, Seth
AU - Roth, Aaron
AU - Wu, Zhiwei Steven
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/1/29
Y1 - 2019/1/29
AB - Kearns, Neel, Roth, and Wu [ICML 2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across protected groups), but then asks that this constraint hold over an exponentially or infinitely large collection of subgroups defined by a class of functions with bounded VC dimension. They give an algorithm guaranteed to learn subject to this constraint, under the condition that it has access to oracles for perfectly learning absent a fairness constraint. In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal, Beygelzimer, Dudik, Langford, and Wallach [ICML 2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes. We find that in general, the Kearns et al. algorithm converges quickly, large gains in fairness can be obtained with mild costs to accuracy, and that optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness. We also provide a number of analyses and visualizations of the dynamics and behavior of the Kearns et al. algorithm. Overall we find this algorithm to be effective on real data, and rich subgroup fairness to be a viable notion in practice.
KW - Algorithmic Bias
KW - Fair Classification
KW - Fairness Auditing
KW - Subgroup Fairness
UR - http://www.scopus.com/inward/record.url?scp=85061791626&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061791626&partnerID=8YFLogxK
U2 - 10.1145/3287560.3287592
DO - 10.1145/3287560.3287592
M3 - Conference contribution
AN - SCOPUS:85061791626
T3 - FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
SP - 100
EP - 109
BT - FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
PB - Association for Computing Machinery, Inc
T2 - 2019 ACM Conference on Fairness, Accountability, and Transparency, FAT* 2019
Y2 - 29 January 2019 through 31 January 2019
ER -