Understanding Inverse Reinforcement Learning under Overparameterization: Non-Asymptotic Analysis and Global Optimality

  • Ruijia Zhang
  • Siliang Zeng
  • Chenliang Li
  • Alfredo Garcia
  • Mingyi Hong

Research output: Contribution to journal › Conference article › peer-review

Abstract

The goal of inverse reinforcement learning (IRL) is to identify the underlying reward function and the corresponding optimal policy from a set of expert demonstrations. While the theoretical guarantees of most IRL algorithms rely on a linear reward structure, we aim to extend the theoretical understanding of IRL to settings where the reward function is parameterized by a neural network. Moreover, conventional IRL algorithms usually adopt a nested structure, leading to computational inefficiency, especially in high-dimensional settings. To address this problem, we propose the first two-timescale single-loop IRL algorithm with a neural-network-parameterized reward and provide a non-asymptotic convergence analysis under overparameterization. Although prior optimality results for linear rewards do not apply, we show that our algorithm can identify the globally optimal reward and policy under certain neural network structures. This is the first IRL algorithm with a non-asymptotic convergence guarantee that provably achieves global optimality in the neural network setting.
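To make the "single-loop" idea in the abstract concrete, the sketch below contrasts it with the nested structure: instead of fully re-solving the policy for each reward update, the policy takes one fast soft Bellman step per iteration while the reward takes one slow gradient step. This is a hypothetical max-entropy-IRL toy on a tabular two-state MDP, not the paper's algorithm — in particular the paper uses a neural-network reward, whereas this sketch uses a tabular reward for brevity, and all quantities (`P`, `true_r`, step sizes) are made up for illustration.

```python
import numpy as np

# Hypothetical toy MDP (2 states, 2 actions); NOT from the paper, which uses
# neural-network rewards. This tabular sketch only illustrates alternating one
# fast policy update with one slow reward update per iteration.
n_s, n_a, gamma = 2, 2, 0.9
P = np.zeros((n_s, n_a, n_s))          # P[s, a, s'] transition probabilities
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.1, 0.9]
P[1, 0] = [0.8, 0.2]; P[1, 1] = [0.2, 0.8]
true_r = np.array([0.0, 1.0])          # expert's (unknown) reward favors state 1

def soft_policy(r, n_iter=200):
    """Max-entropy (soft) value iteration; returns a stochastic policy."""
    V = np.zeros(n_s)
    for _ in range(n_iter):
        Q = r[:, None] + gamma * P @ V
        V = np.log(np.exp(Q).sum(axis=1))
    return np.exp(Q - V[:, None])

def visitation(pi, n_iter=200):
    """Normalized discounted state-visitation frequencies under pi."""
    d, mu = np.ones(n_s) / n_s, np.zeros(n_s)
    for _ in range(n_iter):
        mu += d
        d = gamma * np.einsum('s,sa,sat->t', d, pi, P)
    return mu / mu.sum()

mu_E = visitation(soft_policy(true_r))  # stands in for expert demonstrations

# Single-loop, two-timescale updates: the policy is improved by ONE soft
# Bellman step per iteration (fast timescale) instead of being fully
# re-solved in a nested inner loop; the reward then takes one gradient
# step (slow timescale) toward matching the expert's state visitations.
r, V = np.zeros(n_s), np.zeros(n_s)
alpha_r = 0.1                           # slow (reward) step size
for t in range(500):
    Q = r[:, None] + gamma * P @ V      # fast timescale: one soft Bellman update
    V = np.log(np.exp(Q).sum(axis=1))
    pi = np.exp(Q - V[:, None])
    # slow timescale: max-ent IRL gradient = expert-vs-learner visitation gap
    r += alpha_r * (mu_E - visitation(pi))
```

In this toy run the learned reward ends up ranking state 1 above state 0, matching the expert's preference; the nested-loop cost that the paper's single-loop scheme avoids would correspond to calling full `soft_policy` convergence inside every reward update.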

Original language: English (US)
Pages (from-to): 2944-2952
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 258
State: Published - 2025
Event: 28th International Conference on Artificial Intelligence and Statistics, AISTATS 2025 - Mai Khao, Thailand
Duration: May 3, 2025 - May 5, 2025

Bibliographical note

Publisher Copyright:
Copyright 2025 by the author(s).
