Abstract
Many machine learning (ML) algorithms yield very complicated black-box models. While these models can achieve superb accuracy, it is difficult to articulate the rationale behind a specific output for a given input, which makes verification and trust-building problematic in numerous applications. In response to this challenge, ML researchers have begun devising methods that tackle this issue, creating a relatively heterogeneous set of tools often covered under the umbrella term eXplainable AI (XAI). Historically, system identification researchers have faced similar problems when estimating meaningful parameters in order to obtain more interpretable models. Nevertheless, results from these fields currently seem to draw little from each other. This part of the tutorial, tailored as an introduction, aims to provide system identification researchers with a foundational overview of the motivations behind XAI and its main connections with their field.
Original language | English (US) |
---|---|
Pages (from-to) | 502-507 |
Number of pages | 6 |
Journal | IFAC-PapersOnLine |
Volume | 58 |
Issue number | 15 |
DOIs | |
State | Published - Jul 1 2024 |
Event | 20th IFAC Symposium on System Identification, SYSID 2024, Boston, United States. Duration: Jul 17 2024 → Jul 19 2024 |
Bibliographical note
Publisher Copyright: © 2024 The Authors.
Keywords
- eXplainable AI