In this paper, we consider the problem of designing a controller that minimizes the worst-case peak-to-peak gain of the closed-loop system. In particular, we concentrate on the case where the controller has access to the state of a linear plant and possibly knows the maximal disturbance input amplitude. We apply the principle of optimality and derive a dynamic programming formulation of the optimization problem. Under mild assumptions, we show that, at each step of the dynamic program, the cost-to-go has the form of a gauge function and can be determined recursively through simple transformations. We study both the finite horizon and the infinite horizon case under different information structures. The proposed approach allows us to encompass and improve on recent results based on viability theory. In particular, we present a computational scheme, alternative to the standard bisection algorithm, or gamma iteration, that computes the exact value of the worst-case peak-to-peak gain for any finite horizon. We show that the sequence of finite horizon optimal costs converges from below, as the length of the horizon goes to infinity, to the infinite horizon optimal cost. We also show the existence of an optimal state feedback strategy that is globally exponentially stabilizing, and we derive suboptimal globally exponentially stabilizing strategies from the solutions of finite horizon problems.
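For context on the quantity being optimized: for a stable discrete-time LTI closed loop, the worst-case peak-to-peak (l-infinity induced) gain equals the l1 norm of the closed-loop impulse response. The sketch below evaluates a truncation of that sum for a hypothetical scalar closed-loop system; it is only a baseline illustration of the performance measure, not the authors' dynamic programming scheme, and the system matrices are invented for the example.

```python
import numpy as np

def peak_to_peak_gain(A, B, C, D, horizon=200):
    """Truncated l1 norm of the impulse response h(0) = D, h(k) = C A^(k-1) B.

    For a stable SISO discrete-time LTI system x(k+1) = A x(k) + B w(k),
    z(k) = C x(k) + D w(k), this sum converges to the worst-case
    peak-to-peak gain sup {||z||_inf : ||w||_inf <= 1}.
    """
    gain = float(np.abs(D).sum())   # direct feedthrough term h(0)
    Ak_B = B                        # holds A^(k-1) B as k advances
    for _ in range(horizon):
        gain += float(np.abs(C @ Ak_B).sum())
        Ak_B = A @ Ak_B
    return gain

# Hypothetical stable scalar closed loop: h(k) = 0.5^(k-1), so the
# gain is the geometric series 1 + 0.5 + 0.25 + ... = 2.
A = np.array([[0.5]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])
print(peak_to_peak_gain(A, B, C, D))  # ~2.0
```

The truncation horizon is an assumption of the sketch; the paper's contribution is precisely a scheme that yields the exact finite horizon value without such truncation or gamma iteration.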
Bibliographical note:
Manuscript received September 21, 1996; revised December 23, 1998. Recommended by Associate Editor, J. Shamma. This work was supported by the NSF under Grant 9157306-ECS, Draper Laboratories under Grant DL-H-441636, and AFOSR under Grant F49620-95-0219. The authors are with the Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139 USA. Publisher Item Identifier S 0018-9286(00)03239-6.