Explaining complex systems: a tutorial on transparency and interpretability in machine learning models (part II)

Donatello Materassi, Sean Warnick, Cristian Rojas, Maarten Schoukens, Elizabeth Cross

Research output: Contribution to journal › Conference article › peer-review

Abstract

In the second segment of the tutorial, we move from the granularity of local interpretability to a broader exploration of eXplainable AI (XAI) methods. Building on the first part, which delved into Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), this section takes a more expansive approach. We navigate through XAI techniques of a more global nature, covering counterfactual explanations, equation discovery, and the integration of physics-informed AI. Unlike the initial part, which concentrated on two specific methods, this section offers a general overview of these broader classes of explanation techniques. The objective is to give participants a comprehensive understanding of the diverse strategies available for making complex machine learning models interpretable on a more global scale.
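As a taste of one technique surveyed in this part, the following is a minimal sketch of a counterfactual explanation: find the smallest change to an input that flips the model's prediction. This is not code from the tutorial; the scikit-learn classifier, the synthetic data, and the greedy random search are all illustrative assumptions chosen for brevity.

```python
# Minimal, model-agnostic counterfactual search (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

x = X[0]                            # the instance to explain
target = 1 - clf.predict([x])[0]    # the opposite (desired) class

# Keep the closest random perturbation that reaches the target class.
best, best_dist = None, np.inf
for _ in range(5000):
    candidate = x + rng.normal(size=x.shape) * rng.uniform(0.1, 3.0)
    dist = np.linalg.norm(candidate - x)
    if clf.predict([candidate])[0] == target and dist < best_dist:
        best, best_dist = candidate, dist

if best is None:
    print("No counterfactual found within the search budget.")
else:
    print("Original prediction:  ", clf.predict([x])[0])
    print("Counterfactual class: ", target)
    print("Minimal feature changes:", np.round(best - x, 3))
```

Because the search only queries clf.predict, the same loop applies unchanged to any classifier, which is what makes counterfactual explanations model-agnostic and a natural complement to the local, attribution-based methods of part I.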

Original language: English (US)
Pages (from-to): 497-501
Number of pages: 5
Journal: IFAC-PapersOnLine
Volume: 58
Issue number: 15
DOIs
State: Published - Jul 1 2024
Event: 20th IFAC Symposium on System Identification, SYSID 2024 - Boston, United States
Duration: Jul 17 2024 - Jul 19 2024

Bibliographical note

Publisher Copyright:
© 2024 The Authors.

Keywords

  • eXplainable AI
