Abstract
This article introduces a class of finite-horizon dynamic optimization problems called multi-action stochastic Dynamic Programs (DPs). Their distinguishing feature is that the decision in each state is a multi-dimensional vector. These problems can in principle be solved using Bellman's backward recursion; however, the complexity of this procedure grows exponentially in the dimension of the decision vectors, a phenomenon known as the curse of action-space dimensionality. To overcome this computational challenge, an approximation algorithm is proposed that is rooted in the game-theoretic paradigm of Sampled Fictitious Play (SFP). SFP solves a sequence of DPs with one-dimensional action spaces that are exponentially smaller than the original multi-action stochastic DP. In particular, the computational effort in a fixed number of SFP iterations is linear in the dimension of the decision vectors. It is shown that the sequence of SFP iterates converges to a local optimum, and a numerical case study in manufacturing is presented in which SFP finds solutions with objective values within 1% of the optimal objective value hundreds of times faster than backward recursion. In this case study, SFP solutions are also better, by a statistically significant margin, than those found by a one-step look-ahead heuristic.
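To make the idea concrete, the following is a minimal sketch of one plausible reading of the approach, not the paper's actual algorithm: each coordinate of the decision vector is treated as an SFP "player" that, at every iteration, samples the other coordinates' past policies from their play histories and best-responds by solving a one-dimensional backward recursion. All problem data, dimensions, and function names below are illustrative assumptions introduced only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny synthetic multi-action finite-horizon DP (all data are assumptions).
T, S, D, K = 5, 4, 3, 3                      # horizon, #states, decision dimension, values per component
R = rng.normal(size=(T, S, D, K))            # additively separable stage rewards (assumption)
P = rng.dirichlet(np.ones(S), size=(S, K))   # next-state law driven by component 0 (assumption)

def stage_reward(t, s, a):
    """Reward of the joint action a = (a_0, ..., a_{D-1}) in state s at stage t."""
    return sum(R[t, s, d, a[d]] for d in range(D))

def next_state_dist(s, a):
    """Transition probabilities; in this toy model they depend on the first component only."""
    return P[s, a[0]]

def best_response(d, others):
    """Backward recursion over the single coordinate d, holding the sampled
    policies others[e][t, s] of every other coordinate e fixed."""
    V = np.zeros(S)
    policy = np.zeros((T, S), dtype=int)
    for t in reversed(range(T)):
        V_new = np.empty(S)
        for s in range(S):
            best_val = -np.inf
            for k in range(K):               # one-dimensional action space for coordinate d
                a = [others[e][t, s] if e != d else k for e in range(D)]
                val = stage_reward(t, s, a) + next_state_dist(s, a) @ V
                if val > best_val:
                    best_val, policy[t, s] = val, k
            V_new[s] = best_val
        V = V_new
    return policy

# Sampled Fictitious Play over the D coordinate "players" (sketch).
history = [[np.zeros((T, S), dtype=int)] for _ in range(D)]   # arbitrary initial policies
for it in range(50):
    for d in range(D):
        # Draw one past policy per coordinate from its play history, then best-respond.
        sample = [history[e][rng.integers(len(history[e]))] for e in range(D)]
        history[d].append(best_response(d, sample))

joint_policy = [history[d][-1] for d in range(D)]
print("component policies at stage 0:", [p[0] for p in joint_policy])
```

Under this reading, one SFP iteration performs D separate one-dimensional backward recursions instead of a single recursion over the K^D joint actions, which is consistent with the abstract's claim that the effort in a fixed number of iterations grows linearly, rather than exponentially, in the dimension of the decision vector.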
Original language | English (US) |
---|---|
Pages (from-to) | 742-756 |
Number of pages | 15 |
Journal | IIE Transactions (Institute of Industrial Engineers) |
Volume | 46 |
Issue number | 7 |
DOIs | |
State | Published - Jul 3 2014 |
Externally published | Yes |
Keywords
- Approximate dynamic programming
- game theory
- operations management