Abstract
Dynamic games arise when multiple agents with differing objectives control a dynamic system. They model a wide variety of applications in economics, defense, and energy systems, among others. However, compared to single-agent control problems, the computational methods for dynamic games are relatively limited. As in the single-agent case, only specific dynamic games can be solved exactly, so approximation algorithms are required. In this paper, we show how to extend the Newton step algorithm, the Bellman recursion, and the popular differential dynamic programming (DDP) method for single-agent optimal control to the case of full-information nonzero-sum dynamic games. We show that the Newton step can be computed efficiently and inherits the quadratic convergence rate of its single-agent counterpart when converging to open-loop Nash equilibria, and that the approximate Bellman recursion and DDP methods are closely related and can be used to find local feedback O(ε²)-Nash equilibria. Numerical examples are provided.
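To make the open-loop setting concrete: stacking each player's first-order optimality conditions in their own controls yields a system G(u) = 0, and the Newton iteration solves a linearization of this system at each step. The sketch below is a minimal illustration of that idea on a hypothetical two-player scalar game using JAX automatic differentiation; the dynamics, costs, and horizon are invented for illustration, and the dense Jacobian is formed directly, whereas the paper's contribution is computing the step efficiently by exploiting the game's stagewise structure.

```python
# Minimal sketch (not the paper's implementation): Newton's method on the
# stacked first-order conditions of a hypothetical two-player dynamic game.
import jax
import jax.numpy as jnp

T = 20  # horizon, chosen arbitrarily for this toy example

def rollout(x0, u1, u2):
    # Toy nonlinear dynamics: x_{t+1} = tanh(x_t) + u1_t + u2_t
    xs = [x0]
    for t in range(T):
        xs.append(jnp.tanh(xs[-1]) + u1[t] + u2[t])
    return jnp.stack(xs)

def cost(i, u1, u2, x0=1.0):
    # Player i penalizes the shared state and its own control effort.
    xs = rollout(x0, u1, u2)
    r = 1.0 if i == 1 else 2.0   # distinct control penalties per player
    u = u1 if i == 1 else u2
    return jnp.sum(xs**2) + r * jnp.sum(u**2)

def G(u):
    # Stacked conditions: each player's gradient in its own controls.
    u1, u2 = u[:T], u[T:]
    g1 = jax.grad(lambda v: cost(1, v, u2))(u1)
    g2 = jax.grad(lambda v: cost(2, u1, v))(u2)
    return jnp.concatenate([g1, g2])

u = jnp.zeros(2 * T)
for k in range(10):
    g = G(u)
    J = jax.jacobian(G)(u)           # dense Jacobian; the paper avoids this
    u = u - jnp.linalg.solve(J, g)   # Newton step toward an open-loop Nash
    print(k, float(jnp.linalg.norm(g)))
```

Near a nondegenerate equilibrium the printed residual norms should shrink quadratically, mirroring the convergence rate described in the abstract.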
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 394-442 |
| Number of pages | 49 |
| Journal | Dynamic Games and Applications |
| Volume | 12 |
| Issue number | 2 |
| DOIs | |
| State | Published - Jun 2022 |
Bibliographical note
Publisher Copyright: © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Keywords
- Differential dynamic programming
- Feedback Nash equilibrium
- Newton’s method
- Noncooperative dynamic games
- Open-loop Nash equilibrium