Abstract
Understanding how ML models work is a prerequisite for responsibly designing, deploying, and using ML-based systems. With interpretability approaches, ML can now offer explanations for its outputs to aid human understanding. Though these approaches rely on guidelines for how humans explain things to each other, they ultimately aim to improve the artifact itself: the explanation. In this paper, we propose an alternate framework for interpretability grounded in Weick's sensemaking theory, which focuses on who the explanation is intended for. Recent work has advocated for the importance of understanding stakeholders' needs; we build on this by providing concrete properties (e.g., identity, social context, environmental cues) that shape human understanding. We use an application of sensemaking in organizations as a template for discussing design guidelines for sensible AI: AI that factors in the nuances of human cognition when trying to explain itself.
| Original language | English (US) |
|---|---|
| Title of host publication | Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022 |
| Publisher | Association for Computing Machinery |
| Pages | 702-714 |
| Number of pages | 13 |
| ISBN (Electronic) | 9781450393522 |
| DOIs | |
| State | Published - Jun 21 2022 |
| Externally published | Yes |
| Event | 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022 - Virtual, Online, Korea, Republic of. Duration: Jun 21 2022 → Jun 24 2022 |
Publication series
| Name | ACM International Conference Proceeding Series |
|---|
Conference
| Conference | 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022 |
|---|---|
| Country/Territory | Korea, Republic of |
| City | Virtual, Online |
| Period | 6/21/22 → 6/24/22 |
Bibliographical note
Publisher Copyright: © 2022 ACM.
Keywords
- explainability
- interpretability
- organizations
- sensemaking