Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning

Harmanpreet Kaur, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna Wallach, Jennifer Wortman Vaughan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

261 Scopus citations

Abstract

Machine learning (ML) models are now routinely deployed in domains ranging from criminal justice to healthcare. With this newfound ubiquity, ML has moved beyond academia and grown into an engineering discipline. To that end, interpretability tools have been designed to help data scientists and machine learning practitioners better understand how ML models work. However, there has been little evaluation of the extent to which these tools achieve this goal. We study data scientists' use of two existing interpretability tools, the InterpretML implementation of GAMs and the SHAP Python package. We conduct a contextual inquiry (N=11) and a survey (N=197) of data scientists to observe how they use interpretability tools to uncover common issues that arise when building and evaluating ML models. Our results indicate that data scientists over-trust and misuse interpretability tools. Furthermore, few of our participants were able to accurately describe the visualizations output by these tools. We highlight qualitative themes for data scientists' mental models of interpretability tools. We conclude with implications for researchers and tool designers, and contextualize our findings in the social science literature.

Original language: English (US)
Title of host publication: CHI 2020 - Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450367080
State: Published - Apr 21 2020
Externally published: Yes
Event: 2020 ACM CHI Conference on Human Factors in Computing Systems, CHI 2020 - Honolulu, United States
Duration: Apr 25 2020 - Apr 30 2020

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: 2020 ACM CHI Conference on Human Factors in Computing Systems, CHI 2020
Country/Territory: United States
City: Honolulu
Period: 4/25/20 - 4/30/20

Bibliographical note

Publisher Copyright:
© 2020 ACM.

Keywords

  • interpretability
  • machine learning
  • user-centric evaluation
