Abstract
We study the finite-sample performance of the batch actor-critic algorithm for reinforcement learning with nonlinear function approximation. Specifically, in the critic step, we estimate the action-value function of the actor's current policy within a parametrized function class, while in the actor step, the policy is updated using the policy gradient estimated from the critic, so as to maximize the objective function defined as the expected discounted cumulative reward. Under this setting, we show that, for the parameter sequence generated by the actor steps, the gradient norm of the objective function at any limit point is close to zero up to a fundamental error. In particular, we show that this error corresponds to the statistical rate of policy evaluation with nonlinear function approximation. For the special class of linear functions, and as the number of samples goes to infinity, our result recovers the classical convergence results for the online actor-critic algorithm, which are based on the asymptotic behavior of two-time-scale stochastic approximation.
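To make the batch actor-critic structure described above concrete, the following is a minimal sketch, assuming a small synthetic MDP, a softmax policy, and (for simplicity) a linear critic over one-hot state-action features, i.e., the linear special case the abstract notes recovers classical results. All names (`make` of the MDP, `critic_step`, `actor_step`, step sizes, batch sizes) are illustrative assumptions, not from the paper.

```python
# Hedged sketch: batch actor-critic with a softmax policy and an LSTD-Q critic
# on one-hot features (linear special case). Not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.9

# Random MDP: P[s, a] is a distribution over next states, R[s, a] a reward.
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
R = rng.uniform(0.0, 1.0, size=(nS, nA))

def policy(theta):
    """Softmax policy pi(a|s) parametrized by theta of shape (nS, nA)."""
    z = theta - theta.max(axis=1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def sample_batch(theta, n_traj, horizon=30):
    """Collect a batch of (s, a, r, s', a') tuples under pi_theta."""
    pi = policy(theta)
    batch = []
    for _ in range(n_traj):
        s = rng.integers(nS)
        a = rng.choice(nA, p=pi[s])
        for _ in range(horizon):
            s2 = rng.choice(nS, p=P[s, a])
            a2 = rng.choice(nA, p=pi[s2])
            batch.append((s, a, R[s, a], s2, a2))
            s, a = s2, a2
    return batch

def phi(s, a):
    """One-hot state-action feature (linear critic special case)."""
    f = np.zeros(nS * nA)
    f[s * nA + a] = 1.0
    return f

def critic_step(batch):
    """LSTD-Q: solve A w = b for the TD fixed point of Q^pi in the feature class."""
    d = nS * nA
    A, b = 1e-3 * np.eye(d), np.zeros(d)
    for s, a, r, s2, a2 in batch:
        f, f2 = phi(s, a), phi(s2, a2)
        A += np.outer(f, f - gamma * f2)
        b += r * f
    return np.linalg.solve(A, b)

def actor_step(theta, batch, w, lr=1.0):
    """Policy-gradient ascent using the critic's Q estimate as the action value."""
    pi = policy(theta)
    grad = np.zeros_like(theta)
    for s, a, *_ in batch:
        q = w[s * nA + a]
        g = -pi[s].copy()      # grad log pi(a|s) for a softmax policy
        g[a] += 1.0
        grad[s] += q * g
    return theta + lr * grad / len(batch)

theta = np.zeros((nS, nA))
for it in range(50):
    batch = sample_batch(theta, n_traj=20)
    w = critic_step(batch)                  # critic: evaluate Q^{pi_theta}
    theta = actor_step(theta, batch, w)     # actor: ascend the estimated gradient
print(policy(theta))
```

In this sketch the critic is recomputed from each fresh batch before the actor update, mirroring the batch (rather than two-time-scale online) structure discussed in the abstract; a nonlinear critic would replace `phi` and `critic_step` with a parametrized function class fit by regression on the same batch.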
| Original language | English (US) |
|---|---|
| Title of host publication | 2018 IEEE Conference on Decision and Control, CDC 2018 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 2759-2764 |
| Number of pages | 6 |
| ISBN (Electronic) | 9781538613955 |
| DOIs | |
| State | Published - Jul 2 2018 |
| Event | 57th IEEE Conference on Decision and Control, CDC 2018 - Miami, United States; Duration: Dec 17 2018 → Dec 19 2018 |
Publication series
| Name | Proceedings of the IEEE Conference on Decision and Control |
|---|---|
| Volume | 2018-December |
| ISSN (Print) | 0743-1546 |
| ISSN (Electronic) | 2576-2370 |
Conference
| Conference | 57th IEEE Conference on Decision and Control, CDC 2018 |
|---|---|
| Country/Territory | United States |
| City | Miami |
| Period | 12/17/18 → 12/19/18 |
Bibliographical note
Publisher Copyright: © 2018 IEEE.