While continuous progress in machine learning is producing algorithms with ever-better decision and predictive performance, the way such algorithms operate is also becoming more and more inscrutable. As an increasing number of decisions is ceded to often inexplicable algorithms that are not subject to any form of human supervision or scrutiny, it is only natural to raise doubts about their fairness, soundness, and reliability. This has motivated a growing need for tools capable of disentangling and explaining the mechanisms behind AI-based decisions, creating a new field of research referred to as eXplainable AI (XAI). Given the significant impact that machine learning is also having on estimation and control, this article advances the idea of borrowing methodologies from XAI and adapting them to estimation and control algorithms involving networks of dynamic processes. Specifically, we translate the methodology known as Local Interpretable Model-Agnostic Explanations (LIME) to explain the mechanisms behind a black-box estimation algorithm processing time series. Furthermore, we find that LIME can be extended with notions of causal inference to detect cause-effect relations among the features that the estimation algorithm takes as inputs. This causal inference procedure provides LIME with additional explanatory power.
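The generic LIME mechanism referenced in the abstract can be sketched as follows: perturb the input around the point of interest, query the black box, and fit a distance-weighted linear surrogate whose coefficients act as local feature attributions. The sketch below is a minimal, generic illustration of that idea only; the black-box function, kernel width, and sample count are hypothetical placeholders and do not reflect the paper's actual estimator or its time-series setup.

```python
import numpy as np

# Hypothetical black-box estimator (illustrative only, not from the paper):
# a nonlinear map from a small vector of time-series features to a scalar.
def black_box(x):
    return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[2]

def lime_explain(f, x0, n_samples=2000, sigma=0.5, seed=0):
    """Generic LIME-style local surrogate: perturb x0, query f, and fit a
    distance-weighted linear model; the coefficients approximate each
    feature's local influence on the black-box output."""
    rng = np.random.default_rng(seed)
    # Sample perturbations of x0 from a Gaussian neighborhood.
    X = x0 + sigma * rng.standard_normal((n_samples, x0.size))
    y = np.array([f(x) for x in X])
    # Exponential kernel: samples closer to x0 get larger weight.
    d2 = ((X - x0) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)
    return coef[1:]  # per-feature local weights (intercept dropped)

x0 = np.array([0.3, 1.0, -2.0])
weights = lime_explain(black_box, x0)
```

In this toy setting the fitted weights approximate the partial derivatives of `black_box` at `x0`, which is what makes the surrogate locally faithful; the paper's contribution is adapting this construction to time-series inputs and augmenting it with causal inference over the features.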
Original language: English (US)
Title of host publication: 2023 American Control Conference, ACC 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
State: Published - 2023
Event: 2023 American Control Conference, ACC 2023 - San Diego, United States
Duration: May 31 2023 → Jun 2 2023
Name: 2023 American Control Conference (ACC)
Conference: 2023 American Control Conference, ACC 2023
Period: 5/31/23 → 6/2/23
Bibliographical note: Publisher Copyright © 2023 American Automatic Control Council.