Abstract
We consider the task of estimating a structural model of dynamic decisions by a human agent based on the observable history of implemented actions and visited states. This problem has an inherent nested structure: in the inner problem, an optimal policy for a given reward function is identified, whereas in the outer problem, a measure of fit is maximized. Several approaches have been proposed to alleviate the computational burden of this nested-loop structure, but these methods still suffer from high complexity when the state space is either discrete with large cardinality or continuous in high dimensions. Other approaches in the inverse reinforcement learning literature emphasize policy estimation at the expense of reduced reward estimation accuracy. In this paper, we propose a single-loop estimation algorithm with finite-time guarantees that is equipped to deal with high-dimensional state spaces without compromising reward estimation accuracy. In the proposed algorithm, each policy improvement step is followed by a stochastic gradient step for likelihood maximization. We show that the proposed algorithm converges to a stationary solution with a finite-time guarantee. Further, if the reward is parameterized linearly, the algorithm approximates the maximum likelihood estimator at a sublinear rate.
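The single-loop idea described above can be illustrated with a minimal tabular sketch. Everything below is an assumption for illustration, not the paper's algorithm: a small random MDP with a linearly parameterized reward r(s, a) = θ·φ(s, a), demonstrations drawn from the soft-optimal (logit) policy, and an approximate likelihood gradient (observed minus policy-expected features). Each iteration interleaves one soft policy-improvement step with one stochastic gradient step on θ, rather than solving the inner problem to convergence.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): a small random MDP with a
# linearly parameterized reward r(s, a) = theta . phi(s, a).
rng = np.random.default_rng(0)
nS, nA, d, gamma = 5, 3, 4, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state distribution
phi = rng.standard_normal((nS, nA, d))          # reward features phi(s, a)
theta_true = np.array([1.0, -0.5, 0.3, 0.7])    # ground-truth reward weights

def soft_bellman_step(Q, theta):
    """One soft policy-improvement step (soft Bellman backup)."""
    m = Q.max(axis=1)
    V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))   # soft (log-sum-exp) value
    return phi @ theta + gamma * P @ V

def soft_policy(Q):
    """Boltzmann (logit) choice probabilities implied by soft Q-values."""
    Z = np.exp(Q - Q.max(axis=1, keepdims=True))
    return Z / Z.sum(axis=1, keepdims=True)

def soft_q(theta, iters=200):
    """Inner problem solved to (near) convergence, for evaluation only."""
    Q = np.zeros((nS, nA))
    for _ in range(iters):
        Q = soft_bellman_step(Q, theta)
    return Q

# Demonstrations: (state, action) pairs from the soft-optimal policy at theta_true.
pi_true = soft_policy(soft_q(theta_true))
demos = [(s, rng.choice(nA, p=pi_true[s])) for s in rng.integers(nS, size=300)]

def avg_loglik(theta):
    """Average log-likelihood of the demonstrations under theta."""
    pi = soft_policy(soft_q(theta))
    return np.mean([np.log(pi[s, a]) for s, a in demos])

def single_loop_estimate(demos, steps=3000, lr=0.1):
    """Single-loop scheme: alternate one policy-improvement step with one
    stochastic gradient step on the log-likelihood (approximate gradient)."""
    theta = np.zeros(d)
    Q = np.zeros((nS, nA))
    for t in range(steps):
        Q = soft_bellman_step(Q, theta)          # one inner improvement step
        s, a = demos[rng.integers(len(demos))]   # sample one demonstration
        pi = soft_policy(Q)
        # Approximate gradient of log pi(a|s): observed minus expected features.
        g = phi[s, a] - pi[s] @ phi[s]
        theta += lr / np.sqrt(t + 1) * g         # diminishing step size
    return theta

theta_hat = single_loop_estimate(demos)
```

In this sketch the gradient step uses the current (not fully converged) Q-values, which is what makes the loop "single": policy improvement and likelihood maximization advance together, one step each per iteration.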
Original language | English (US) |
---|---|
Pages (from-to) | 720-737 |
Number of pages | 18 |
Journal | Operations Research |
Volume | 73 |
Issue number | 2 |
DOIs | |
State | Published - Mar 2025 |
Bibliographical note
Publisher Copyright: © 2024 INFORMS.
Keywords
- dynamic discrete choice model
- inverse reinforcement learning