## Abstract

We consider finite- and infinite-horizon Markov decision processes (MDPs) with unknown state-transition probabilities. They are assumed to belong to certain ambiguity sets, and the goal is to maximize the worst-case expected total discounted reward over all probabilities from these sets. Specifically, the ambiguity set for any state-action-stage triplet is a ball: it includes all probability mass functions (pmfs) within a certain distance from the empirical pmf constructed using historical, independent observations of state transitions. We prove that optimal values in the resulting robust MDPs (RMDPs) converge to the optimal value of the true MDP if the radii of the ambiguity balls approach zero as the sample size diverges to infinity. In addition, robust optimal policies for sufficiently large sample sizes are optimal for the true MDP. These results rely on a sufficient condition that links convergence of pmfs with respect to the distance function with their componentwise convergence in an appropriate space. Further, for finite sample sizes, the optimal value of the RMDP provides a lower bound on the value of the robust optimal policy in the true MDP, with high probability. A certain concentration inequality is sufficient for this out-of-sample performance guarantee. Several well-known distances satisfy these conditions. Numerical experiments suggest that one can choose from several distance functions to build computationally tractable RMDPs that exhibit good out-of-sample performance and balance conservativeness with probabilistic guarantees.
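To illustrate the construction the abstract describes, the sketch below computes the worst-case expectation over an ambiguity ball around an empirical pmf. It is not taken from the paper: it assumes the total-variation (L1) distance, one of the well-known distances such balls are commonly built with, for which the inner minimization has a simple greedy closed form (shift up to half the radius of probability mass from the highest-value states to the lowest-value one). All names (`worst_case_expectation`, `p_hat`, `radius`) are illustrative.

```python
import numpy as np

def worst_case_expectation(p_hat, v, radius):
    """Minimize p @ v over pmfs p with ||p - p_hat||_1 <= radius.

    Greedy closed form for the L1 ball intersected with the simplex:
    move mass (at most radius / 2) onto the state with the smallest
    value, taking it away from the states with the largest values.
    """
    p = np.asarray(p_hat, dtype=float).copy()
    v = np.asarray(v, dtype=float)
    i_min = int(np.argmin(v))
    # Mass we may add to the worst state, capped so p[i_min] <= 1.
    budget = min(radius / 2.0, 1.0 - p[i_min])
    p[i_min] += budget
    # Remove the same amount of mass, starting from the best states.
    for j in np.argsort(v)[::-1]:
        if j == i_min:
            continue
        take = min(budget, p[j])
        p[j] -= take
        budget -= take
        if budget <= 1e-12:
            break
    return float(p @ v)

# Example: empirical pmf over two successor states with values 0 and 1.
p_hat = np.array([0.5, 0.5])
v = np.array([0.0, 1.0])
print(worst_case_expectation(p_hat, v, 0.0))  # nominal expectation: 0.5
print(worst_case_expectation(p_hat, v, 0.2))  # worst case shifts 0.1 mass: 0.4
```

In a robust Bellman backup, this inner minimization replaces the ordinary expectation for each state-action pair; as the radius shrinks to zero with growing sample size, it recovers the nominal expectation, consistent with the convergence results stated above.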

| Original language | English (US) |
|---|---|
| Journal | SIAM Journal on Optimization |
| Volume | 32 |
| Issue number | 2 |
| DOIs | |
| State | Published - 2022 |
| Externally published | Yes |

### Bibliographical note

Publisher Copyright: © 2022 Society for Industrial and Applied Mathematics.

## Keywords

- distributionally robust optimization
- dynamic programming
- probabilistic performance guarantees
- value convergence