Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders

Hao Fei Cheng, Ruotong Wang, Zheng Zhang, Fiona O'Connell, Terrance Gray, Max Harper, Haiyi Zhu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Increasingly, algorithms are used to make important decisions across society. However, these algorithms are usually poorly understood, which can reduce transparency and evoke negative emotions. In this research, we seek to learn design principles for explanation interfaces that communicate how decision-making algorithms work, in order to help organizations explain their decisions to stakeholders, or to support users' "right to explanation". We conducted an online experiment where 199 participants used different explanation interfaces to understand an algorithm for making university admissions decisions. We measured users' objective and self-reported understanding of the algorithm. Our results show that both interactive explanations and "white-box" explanations (i.e. that show the inner workings of an algorithm) can improve users' comprehension. Although the interactive approach is more effective at improving comprehension, it comes with a trade-off of taking more time. Surprisingly, we also find that users' trust in algorithmic decisions is not affected by the explanation interface or their level of comprehension of the algorithm.
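
As a rough illustration of the "white-box" concept from the abstract (a minimal sketch, not the study's actual system: the model, feature names, and weights below are invented for demonstration), a white-box view of a hypothetical linear admissions model might surface each feature's weight and its contribution to the final score:

# Illustrative sketch only -- not the paper's algorithm or interface.
# It mimics what a "white-box" explanation might expose for a hypothetical
# linear admissions-scoring model: each feature's weight and contribution.

FEATURE_WEIGHTS = {
    "gpa": 0.5,           # hypothetical weight on grade-point average
    "test_score": 0.3,    # hypothetical weight on standardized test score
    "essay_rating": 0.2,  # hypothetical weight on reviewer essay rating
}

def admission_score(applicant):
    """Weighted sum over normalized (0-1) feature values."""
    return sum(w * applicant[f] for f, w in FEATURE_WEIGHTS.items())

def explain(applicant):
    """Show the model's inner workings: per-feature contributions."""
    for feature, weight in FEATURE_WEIGHTS.items():
        contribution = weight * applicant[feature]
        print(f"{feature}: {applicant[feature]:.2f} x weight {weight:.2f} = {contribution:.2f}")
    print(f"total score: {admission_score(applicant):.2f}")

explain({"gpa": 0.90, "test_score": 0.75, "essay_rating": 0.60})

An interactive explanation, by contrast, might additionally let users change the inputs and watch the score update.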

Original language: English (US)
Title of host publication: CHI 2019 - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450359702
DOI: 10.1145/3290605.3300789
State: Published - May 2, 2019
Event: 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019 - Glasgow, United Kingdom
Duration: May 4, 2019 - May 9, 2019

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019
Country: United Kingdom
City: Glasgow
Period: 5/4/19 - 5/9/19

Fingerprint

  • Decision making
  • Transparency
  • Experiments

Keywords

  • Algorithmic decision-making
  • Explanation interfaces

Cite this

Cheng, H. F., Wang, R., Zhang, Z., O'Connell, F., Gray, T., Harper, M., & Zhu, H. (2019). Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders. In CHI 2019 - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Conference on Human Factors in Computing Systems - Proceedings). Association for Computing Machinery. https://doi.org/10.1145/3290605.3300789
