Structural Estimation of Markov Decision Processes in High-Dimensional State Space with Finite-Time Guarantees

Siliang Zeng, Mingyi Hong, Alfredo Garcia

Research output: Contribution to journal › Article › peer-review

Abstract

We consider the task of estimating a structural model of dynamic decisions by a human agent based on the observable history of implemented actions and visited states. This problem has an inherent nested structure: in the inner problem, an optimal policy for a given reward function is identified, whereas in the outer problem, a measure of fit is maximized. Several approaches have been proposed to alleviate the computational burden of this nested-loop structure, but these methods still suffer from high complexity when the state space is either discrete with large cardinality or continuous in high dimensions. Other approaches in the inverse reinforcement learning literature emphasize policy estimation at the expense of reduced reward estimation accuracy. In this paper, we propose a single-loop estimation algorithm with finite-time guarantees that is equipped to deal with high-dimensional state spaces without compromising reward estimation accuracy. In the proposed algorithm, each policy improvement step is followed by a stochastic gradient step for likelihood maximization. We show that the proposed algorithm converges to a stationary solution with a finite-time guarantee. Further, if the reward is parameterized linearly, the algorithm approximates the maximum likelihood estimator at a sublinear rate.
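The single-loop idea in the abstract — alternating one policy-improvement step with one stochastic gradient step on the likelihood, rather than solving the inner MDP to optimality for every candidate reward — can be illustrated on a toy tabular problem. The sketch below is not the authors' algorithm; it is a minimal illustration under common soft-MDP assumptions: a small random MDP, a linearly parameterized state-action reward `r = Phi @ theta`, an entropy-regularized (softmax) optimal policy, and demonstrations drawn from that policy. All sizes, names, and the learning rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d, gamma = 5, 3, 2, 0.9                     # toy sizes (hypothetical)
P = rng.dirichlet(np.ones(S), size=(S, A))        # P[s, a] -> next-state distribution
Phi = rng.standard_normal((S, A, d))              # state-action features; reward r = Phi @ theta
theta_true = np.array([1.0, -0.5])                # "expert" reward parameter (hypothetical)

def soft_backup(Q, r):
    """One entropy-regularized Bellman backup: V(s) = logsumexp_a Q(s, a)."""
    Qmax = Q.max(axis=1)
    V = Qmax + np.log(np.exp(Q - Qmax[:, None]).sum(axis=1))
    return r + gamma * P @ V

def softmax_policy(Q):
    Z = np.exp(Q - Q.max(axis=1, keepdims=True))
    return Z / Z.sum(axis=1, keepdims=True)

# Generate demonstrations from the soft-optimal policy under theta_true.
Q = np.zeros((S, A))
for _ in range(300):
    Q = soft_backup(Q, Phi @ theta_true)
pi_star = softmax_policy(Q)
demos = [(s, rng.choice(A, p=pi_star[s])) for s in rng.integers(0, S, 1000)]

# Single-loop estimation sketch: each iteration does ONE soft policy-improvement
# backup and ONE stochastic gradient step on the demonstration log-likelihood,
# instead of re-solving the inner MDP for every theta (the nested-loop approach).
theta = np.zeros(d)
Qhat = np.zeros((S, A))
M = np.zeros((S, A, d))                           # running estimate of dQ/dtheta
lr = 0.1                                          # step size (hypothetical)
for _ in range(3000):
    Qhat = soft_backup(Qhat, Phi @ theta)         # inner step: one backup, not full convergence
    pi = softmax_policy(Qhat)
    # One backup of the Q-gradient: dQ/dtheta = Phi + gamma * P @ E_pi[dQ/dtheta].
    M = Phi + gamma * P @ np.einsum('sb,sbd->sd', pi, M)
    s, a = demos[rng.integers(len(demos))]        # stochastic step: sample one (s, a) pair
    grad = M[s, a] - pi[s] @ M[s]                 # grad of log softmax-policy likelihood
    theta += lr * grad
```

The key structural point is that the policy estimate `Qhat` and the reward estimate `theta` are improved together in a single loop, which is what makes finite-time analysis of the coupled iteration possible; the paper's contribution is establishing such guarantees in high-dimensional state spaces.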

Original language: English (US)
Pages (from-to): 720-737
Number of pages: 18
Journal: Operations Research
Volume: 73
Issue number: 2
State: Published - Mar 2025

Bibliographical note

Publisher Copyright:
© 2024 INFORMS.

Keywords

  • dynamic discrete choice model
  • inverse reinforcement learning
