Abstract
Kearns, Neel, Roth, and Wu [ICML 2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across protected groups), but then asks that this constraint hold over an exponentially or infinitely large collection of subgroups defined by a class of functions with bounded VC dimension. They give an algorithm guaranteed to learn subject to this constraint, under the condition that it has access to oracles for perfect learning absent a fairness constraint. In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal, Beygelzimer, Dudik, Langford, and Wallach [ICML 2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes. We find that in general, the Kearns et al. algorithm converges quickly, large gains in fairness can be obtained with mild costs to accuracy, and that optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness. We also provide a number of analyses and visualizations of the dynamics and behavior of the Kearns et al. algorithm. Overall we find this algorithm to be effective on real data, and rich subgroup fairness to be a viable notion in practice.
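The auditing step at the heart of rich subgroup fairness, finding a subgroup over which a statistical constraint (here, false positive rate parity) is badly violated, can be sketched as follows. This is a simplified illustration, not the Kearns et al. algorithm itself: their method casts fair learning as a zero-sum game between a learner and an auditor and uses cost-sensitive classification oracles, whereas this sketch simply enumerates a small, finite collection of candidate subgroup functions (`subgroup_fns` is an assumed input) and reports the largest population-weighted false-positive-rate gap.

```python
import numpy as np

def fp_rate(y_true, y_pred, mask):
    """False positive rate of y_pred among the true negatives selected by mask."""
    neg = (y_true == 0) & mask
    if neg.sum() == 0:
        return 0.0
    return y_pred[neg].mean()

def audit_subgroups(X, y_true, y_pred, subgroup_fns):
    """Return the subgroup function with the largest population-weighted
    gap between its FP rate and the overall FP rate, along with that gap.

    Each element of subgroup_fns maps the feature matrix X to a boolean
    membership mask; weighting by subgroup size mirrors the way rich
    subgroup fairness discounts violations on very small subgroups.
    """
    base = fp_rate(y_true, y_pred, np.ones(len(y_true), dtype=bool))
    worst_gap, worst_g = 0.0, None
    for g in subgroup_fns:
        mask = g(X)
        weight = mask.mean()  # fraction of the population in the subgroup
        gap = weight * abs(fp_rate(y_true, y_pred, mask) - base)
        if gap > worst_gap:
            worst_gap, worst_g = gap, g
    return worst_g, worst_gap
```

In the full algorithm, the class of subgroup functions is a rich (e.g., VC-bounded) class rather than a hand-enumerated list, and the auditor's best response is computed via a learning heuristic rather than exhaustive search.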
Original language | English (US) |
---|---|
Title of host publication | FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency |
Publisher | Association for Computing Machinery, Inc |
Pages | 100-109 |
Number of pages | 10 |
ISBN (Electronic) | 9781450361255 |
DOIs | |
State | Published - Jan 29 2019 |
Event | 2019 ACM Conference on Fairness, Accountability, and Transparency, FAT* 2019 - Atlanta, United States |
Duration | Jan 29 2019 → Jan 31 2019 |
Publication series
Name | FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency |
---|---|
Conference
Conference | 2019 ACM Conference on Fairness, Accountability, and Transparency, FAT* 2019 |
---|---|
Country/Territory | United States |
City | Atlanta |
Period | 1/29/19 → 1/31/19 |
Bibliographical note
Publisher Copyright: © 2019 Association for Computing Machinery.
Keywords
- Algorithmic Bias
- Fair Classification
- Fairness Auditing
- Subgroup Fairness