Abstract
As Graph Neural Networks (GNNs) are widely used in real-world applications, model explanations are required not only by users but also by legal regulations. However, simultaneously achieving high fidelity and low computational cost in generating explanations remains a challenge for current methods. In this work, we propose a GNN explanation framework named LeArn Removal-based Attribution (LARA) to address this problem. Specifically, we introduce removal-based attribution and demonstrate, both theoretically and experimentally, its substantiated link to interpretability fidelity. The explainer in LARA learns to generate removal-based attribution, enabling it to provide high-fidelity explanations. A subgraph sampling strategy is designed in LARA to improve the scalability of the training process. At deployment, LARA can efficiently generate an explanation through a single feed-forward pass. We benchmark our approach against other state-of-the-art GNN explanation methods on six datasets. The results highlight the effectiveness of our framework in terms of both efficiency and fidelity. In particular, LARA is 3.1× faster and achieves higher fidelity than the state-of-the-art method on the large dataset ogbn-arxiv (more than 160K nodes and 1M edges), showing its great potential in real-world applications. Our source code is available at https://github.com/yaorong0921/LARA.
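To make the idea concrete, below is a minimal sketch (not the authors' implementation) of the pipeline the abstract describes: an expensive removal-based attribution is computed as a training target, an amortized explainer network learns to imitate it on sampled subsets, and at deployment a single forward pass yields the explanation. All names here (`TinyGNN`, `removal_attribution`, `Explainer`) and the MSE objective, node-removal scheme, and sampling strategy are illustrative assumptions; the paper's actual architecture and losses may differ.

```python
# Hypothetical sketch of a learned removal-based attribution explainer.
import torch
import torch.nn as nn


def gcn_layer(x, adj, weight):
    """One mean-aggregation graph convolution over a dense adjacency matrix."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return (adj @ x / deg) @ weight


class TinyGNN(nn.Module):
    """Stand-in for the black-box GNN to be explained."""
    def __init__(self, in_dim, hid, n_classes):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(in_dim, hid) * 0.1)
        self.w2 = nn.Parameter(torch.randn(hid, n_classes) * 0.1)

    def forward(self, x, adj):
        h = torch.relu(gcn_layer(x, adj, self.w1))
        return gcn_layer(h, adj, self.w2)


@torch.no_grad()
def removal_attribution(model, x, adj, target_node):
    """Score each node by the prediction drop when it is detached from the
    graph. This is the expensive quantity the explainer learns to imitate."""
    base = model(x, adj)[target_node]
    cls = base.argmax()
    scores = torch.zeros(x.size(0))
    for v in range(x.size(0)):
        adj_v = adj.clone()
        adj_v[v, :] = 0.0  # remove all edges incident to node v
        adj_v[:, v] = 0.0
        scores[v] = base[cls] - model(x, adj_v)[target_node, cls]
    return scores


class Explainer(nn.Module):
    """Amortized explainer: one feed-forward pass yields attribution scores."""
    def __init__(self, in_dim, hid=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(), nn.Linear(hid, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)


# Training sketch: fit the explainer to removal-based targets.
n, d, c = 40, 16, 3
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.T) > 0).float()
model, explainer = TinyGNN(d, 32, c), Explainer(d)
opt = torch.optim.Adam(explainer.parameters(), lr=1e-3)

for step in range(200):
    target = torch.randint(n, (1,)).item()
    # Crude stand-in for LARA's subgraph sampling: score only a random
    # subset of nodes per step to keep the training cost bounded.
    subset = torch.randperm(n)[:10]
    y = removal_attribution(model, x, adj, target)[subset]
    loss = torch.mean((explainer(x)[subset] - y) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Deployment: explanations come from a single forward pass, no removals needed.
node_scores = explainer(x)
```

The key design point the abstract emphasizes is amortization: the per-removal cost (one model evaluation per candidate node or edge) is paid only during training, while inference reduces to a single pass of the explainer.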
| Original language | English (US) |
|---|---|
| Article number | 42 |
| Journal | ACM Transactions on Knowledge Discovery from Data |
| Volume | 19 |
| Issue number | 2 |
| DOIs | |
| State | Published - Feb 14 2025 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Keywords
- Efficient XAI
- GNN Explanation
- XAI