An empirical study of rich subgroup fairness for machine learning

Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Kearns, Neel, Roth, and Wu [ICML 2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across protected groups), but then asks that this constraint hold over an exponentially or infinitely large collection of subgroups defined by a class of functions with bounded VC dimension. They give an algorithm guaranteed to learn subject to this constraint, under the condition that it has access to oracles for perfectly learning absent a fairness constraint. In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal, Beygelzimer, Dudík, Langford, and Wallach [ICML 2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes. We find that in general, the Kearns et al. algorithm converges quickly, that large gains in fairness can be obtained with mild costs to accuracy, and that optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness. We also provide a number of analyses and visualizations of the dynamics and behavior of the Kearns et al. algorithm. Overall, we find this algorithm to be effective on real data and rich subgroup fairness to be a viable notion in practice.
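
To make the auditing step concrete, below is a minimal sketch (Python; not the authors' released GerryFair implementation) of a single auditing round for rich subgroup fairness with respect to false positive rates. It assumes subgroups defined by linear threshold functions over the protected attributes and searches for a violating subgroup with a least-squares regression heuristic, in the spirit of the paper's substitution of fast heuristics for learning oracles; the function names and the regression heuristic here are illustrative assumptions, not the paper's exact procedure.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def fp_disparity(member, y_true, y_pred):
        # Audit value of a subgroup g: the empirical Pr[g | y=0] times the
        # gap between the subgroup's false positive rate and the overall one.
        neg = (y_true == 0)
        if not np.any(member & neg):
            return 0.0
        fp_all = y_pred[neg].mean()
        fp_g = y_pred[member & neg].mean()
        return member[neg].mean() * abs(fp_g - fp_all)

    def audit(X_protected, y_true, y_pred):
        # Heuristic auditor: each negative example's contribution to the
        # FP rate, relative to the base rate, is (y_pred - fp_all); positive
        # examples contribute nothing. Regress these costs on the protected
        # attributes and threshold at zero to propose a linear-threshold
        # subgroup; also check the complement, since the gap can go either way.
        neg = (y_true == 0)
        fp_all = y_pred[neg].mean()
        cost = np.where(neg, y_pred - fp_all, 0.0)
        g = LinearRegression().fit(X_protected, cost).predict(X_protected) > 0
        best = max((g, ~g), key=lambda m: fp_disparity(m, y_true, y_pred))
        return best, fp_disparity(best, y_true, y_pred)

In the full algorithm, an auditor of this kind is played repeatedly against a learner that reweights the training data in response to each violating subgroup found; the sketch above corresponds to one round of that game against a fixed classifier.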

Original language: English (US)
Title of host publication: FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
Publisher: Association for Computing Machinery, Inc
Pages: 100-109
Number of pages: 10
ISBN (Electronic): 9781450361255
State: Published - Jan 29, 2019
Event: 2019 ACM Conference on Fairness, Accountability, and Transparency, FAT* 2019 - Atlanta, United States
Duration: Jan 29, 2019 - Jan 31, 2019

Publication series

Name: FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency

Conference

Conference: 2019 ACM Conference on Fairness, Accountability, and Transparency, FAT* 2019
Country/Territory: United States
City: Atlanta
Period: 1/29/19 - 1/31/19

Bibliographical note

Publisher Copyright:
© 2019 Association for Computing Machinery.

Keywords

  • Algorithmic Bias
  • Fair Classification
  • Fairness Auditing
  • Subgroup Fairness
