Explainable AI: motivations and connections with system identification

Sean Warnick, Donatello Materassi, Krishnamurthy Vemuru, Farrokh Vatan, Pier Luigi Petrillo, Amidu Kamara, Brian Henz

Research output: Contribution to journal › Conference article › peer-review

Abstract

Many machine learning (ML) algorithms yield very complicated black-box models. While these models can have superb accuracy, it is often difficult to articulate the rationale behind a specific output for a given input, making verification and trust-building problematic in many applications. In response to this challenge, ML researchers have started to devise methods that tackle this issue, creating a relatively heterogeneous set of tools often grouped under the umbrella term eXplainable AI (XAI). Historically, system identification researchers have faced similar problems when estimating meaningful parameters in order to obtain more interpretable models. Nevertheless, results from these two fields currently seem to draw little from each other. This part of the tutorial, tailored as an introduction, aims to provide system identification researchers with a foundational overview of the motivations behind XAI and the main connections with their field.

Original language: English (US)
Pages (from-to): 502-507
Number of pages: 6
Journal: IFAC-PapersOnLine
Volume: 58
Issue number: 15
State: Published - Jul 1 2024
Event: 20th IFAC Symposium on System Identification, SYSID 2024 - Boston, United States
Duration: Jul 17 2024 - Jul 19 2024

Bibliographical note

Publisher Copyright:
© 2024 The Authors.

Keywords

  • eXplainable AI
