Increasingly, algorithms are used to make important decisions across society. However, these algorithms are usually poorly understood, which can reduce transparency and evoke negative emotions. In this research, we seek to identify design principles for explanation interfaces that communicate how decision-making algorithms work, in order to help organizations explain their decisions to stakeholders and to support users' "right to explanation". We conducted an online experiment in which 199 participants used different explanation interfaces to understand an algorithm for making university admissions decisions. We measured users' objective and self-reported understanding of the algorithm. Our results show that both interactive explanations and "white-box" explanations (i.e., explanations that show the inner workings of an algorithm) can improve users' comprehension. Although the interactive approach is more effective at improving comprehension, it comes with the trade-off of taking more time. Surprisingly, we also find that users' trust in algorithmic decisions is affected neither by the explanation interface nor by their level of comprehension of the algorithm.
Original language: English (US)
Title of host publication: CHI 2019 - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery
State: Published - May 2, 2019
Event: 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019 - Glasgow, United Kingdom
Duration: May 4, 2019 → May 9, 2019
Name: Conference on Human Factors in Computing Systems - Proceedings
Conference: 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019
Period: 5/4/19 → 5/9/19
Bibliographical note: Publisher Copyright: © 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
- Algorithmic decision-making
- Explanation interfaces