Abstract
The goal of the inverse reinforcement learning (IRL) task is to identify the underlying reward function and the corresponding optimal policy from a set of expert demonstrations. While most IRL algorithms' theoretical guarantees rely on a linear reward structure, we aim to extend the theoretical understanding of IRL to scenarios where the reward function is parameterized by neural networks. Moreover, conventional IRL algorithms usually adopt a nested structure, leading to computational inefficiency, especially in high-dimensional settings. To address this problem, we propose the first two-timescale single-loop IRL algorithm under a neural-network-parameterized reward and provide a non-asymptotic convergence analysis under overparameterization. Although prior optimality results for linear rewards do not apply, we show that our algorithm can identify the globally optimal reward and policy under certain neural network structures. This is the first IRL algorithm with a non-asymptotic convergence guarantee that provably achieves global optimality in neural network settings.
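The abstract describes a single-loop scheme in which the policy is updated on a fast timescale and the neural-network reward on a slow one, rather than fully solving the inner RL problem per reward update. The toy sketch below illustrates that two-timescale structure only; it is not the paper's algorithm. All names, the bandit setting, the one-hidden-layer reward network, and the stepsizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: a 5-armed bandit whose "expert" mostly picks arm 3.
n_actions = 5
expert_probs = np.array([0.05, 0.05, 0.05, 0.8, 0.05])

# Assumed one-hidden-layer reward network: r(a) = w2 @ tanh(W1[:, a]).
W1 = rng.normal(scale=0.1, size=(8, n_actions))
w2 = rng.normal(scale=0.1, size=8)

def reward(W1, w2):
    # Reward of every action at once; tanh hidden layer.
    return w2 @ np.tanh(W1)  # shape (n_actions,)

logits = np.zeros(n_actions)  # soft policy parameters, fast timescale
alpha, beta = 0.5, 0.05       # fast (policy) vs. slow (reward) stepsizes

for t in range(2000):
    r = reward(W1, w2)
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()

    # Fast timescale: one soft policy-improvement step toward softmax(r),
    # instead of a nested loop that solves the RL problem to completion.
    logits += alpha * (r - logits)

    # Slow timescale: ascend E_expert[r] - E_pi[r] by backprop through
    # the reward network, separating expert behavior from the current policy.
    d_r = expert_probs - pi          # gradient of the objective w.r.t. r(a)
    h = np.tanh(W1)                  # hidden activations, shape (8, n_actions)
    w2 += beta * (h @ d_r)
    W1 += beta * (w2[:, None] * (1.0 - h**2)) * d_r

# After training, the learned policy should concentrate on the expert's arm.
```

At the (approximate) equilibrium the policy's action distribution matches the expert's, so the recovered reward ranks the expert's preferred arm highest; the two stepsizes with `beta << alpha` are what let a single loop emulate the nested structure.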
| Original language | English (US) |
|---|---|
| Pages (from-to) | 2944-2952 |
| Number of pages | 9 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 258 |
| State | Published - 2025 |
| Event | 28th International Conference on Artificial Intelligence and Statistics (AISTATS 2025), Mai Khao, Thailand, May 3–5, 2025 |
Bibliographical note
Publisher Copyright: Copyright 2025 by the author(s).