Newton’s Method, Bellman Recursion and Differential Dynamic Programming for Unconstrained Nonlinear Dynamic Games

Research output: Contribution to journal › Article › peer-review

Abstract

Dynamic games arise when multiple agents with differing objectives control a dynamic system. They model a wide variety of applications in economics, defense, energy systems, and beyond. However, compared to single-agent control problems, computational methods for dynamic games are relatively limited. As in the single-agent case, only specific dynamic games can be solved exactly, so approximation algorithms are required. In this paper, we show how to extend the Newton step algorithm, the Bellman recursion, and the popular differential dynamic programming (DDP) method for single-agent optimal control to the case of full-information nonzero-sum dynamic games. We show that the Newton step can be computed efficiently and inherits its original quadratic convergence rate to open-loop Nash equilibria, and that the approximate Bellman recursion and DDP methods are very similar and can be used to find local feedback O(ε²)-Nash equilibria. Numerical examples are provided.
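As a rough illustration of the Newton approach the abstract refers to, the sketch below applies Newton's method to the stacked first-order (stationarity) conditions of a simple static two-player game; a point where each player's cost is stationary in its own decision is an open-loop Nash equilibrium. This is a minimal stand-in, not the paper's algorithm: the cost functions `f1`, `f2` and their gradients are invented for illustration, and the paper's method stacks these conditions over an entire control horizon.

```python
import numpy as np

# Hypothetical two-player game (illustrative only):
#   player 1 minimizes f1(u1, u2) = (u1 - 1)^2 + u1*u2 + 0.1*u1^4 over u1
#   player 2 minimizes f2(u1, u2) = (u2 + 2)^2 + u1*u2 over u2
# A Nash point solves the joint stationarity system G(u) = 0, where
# G stacks df1/du1 and df2/du2.

def G(u):
    u1, u2 = u
    return np.array([
        2.0 * (u1 - 1.0) + u2 + 0.4 * u1**3,  # df1/du1
        2.0 * (u2 + 2.0) + u1,                # df2/du2
    ])

def JG(u):
    # Jacobian of G; Newton solves JG(u) @ step = -G(u) at each iterate.
    u1, _ = u
    return np.array([
        [2.0 + 1.2 * u1**2, 1.0],
        [1.0,               2.0],
    ])

u = np.zeros(2)
for _ in range(20):
    step = np.linalg.solve(JG(u), -G(u))
    u += step
    if np.linalg.norm(step) < 1e-12:  # quadratic convergence: few iterations
        break

print("Nash point:", u, "residual:", np.linalg.norm(G(u)))
```

Because Newton's method converges quadratically near a regular solution, the residual of the stationarity system collapses to machine precision in a handful of iterations, mirroring the convergence rate the abstract claims for the dynamic-game setting.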

Original language: English (US)
Pages (from-to): 394-442
Number of pages: 49
Journal: Dynamic Games and Applications
Volume: 12
Issue number: 2
DOIs
State: Published - Jun 2022

Bibliographical note

Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

Keywords

  • Differential dynamic programming
  • Feedback Nash equilibrium
  • Newton’s method
  • Noncooperative dynamic games
  • Open-loop Nash equilibrium
