Abstract
This work proposes a new algorithm, the Single-timescale Double-momentum Stochastic Approximation (SUSTAIN), for tackling stochastic unconstrained bilevel optimization problems. We focus on bilevel problems where the lower-level subproblem is strongly convex and the upper-level objective function is smooth. Unlike prior works, which rely on two-timescale or double-loop techniques, we design a stochastic momentum-assisted gradient estimator for both the upper- and lower-level updates. This allows us to control the errors in the stochastic gradient updates caused by inaccurate solutions to both subproblems. If the upper objective function is smooth but possibly non-convex, we show that SUSTAIN requires O(ε^{-3/2}) iterations (each using O(1) samples) to find an ε-stationary solution, defined as a point at which the squared norm of the gradient of the outer function is at most ε. The total number of stochastic gradient samples required for the upper- and lower-level objective functions matches the best-known complexity for single-level stochastic gradient algorithms. We also analyze the case where the upper-level objective function is strongly convex.
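The abstract describes the core update: both levels are advanced on a single timescale, each with a momentum-assisted (STORM-style) stochastic gradient estimator. Below is a minimal sketch of that idea on a toy quadratic bilevel problem; the toy objectives, constant step sizes, and the closed-form hypergradient used here are illustrative assumptions, not the authors' SUSTAIN implementation.

```python
# Minimal sketch (not the authors' code) of single-timescale, double-momentum
# updates with STORM-style gradient estimators on a toy stochastic bilevel problem.
# Toy objectives: upper f(x, y) = 0.5*(x - y)^2, lower g(x, y) = 0.5*(y - 2x)^2,
# so the lower-level solution is y*(x) = 2x. Step sizes are kept constant for
# simplicity; the paper uses carefully chosen (decreasing) step sizes.
import numpy as np

rng = np.random.default_rng(0)

def hypergrad_upper(x, y, xi):
    # Stochastic hypergradient estimate for the toy problem. The exact hypergradient is
    # grad_x f - grad_xy g * (grad_yy g)^{-1} * grad_y f = (x - y) - (-2)(1)(-(x - y))
    # = -(x - y); here the second-order terms are constants, so we plug them in directly.
    return -(x - y) + 0.1 * xi

def grad_lower(x, y, zeta):
    # Stochastic gradient (w.r.t. y) of the strongly convex lower objective g.
    return (y - 2.0 * x) + 0.1 * zeta

def sustain_like(T=2000, alpha=0.01, gamma=0.05, eta=0.1, beta=0.1):
    x, y = 1.0, 0.0
    hf = hypergrad_upper(x, y, rng.standard_normal())  # upper-level momentum estimator
    hg = grad_lower(x, y, rng.standard_normal())        # lower-level momentum estimator
    for _ in range(T):
        # Single timescale: both variables are updated at every iteration.
        x_new = x - alpha * hf
        y_new = y - gamma * hg
        # Momentum-assisted (STORM-style) recursion: a fresh stochastic gradient plus a
        # correction term evaluated at the old and new iterates with the same sample,
        # which keeps the estimator error under control.
        xi, zeta = rng.standard_normal(), rng.standard_normal()
        hf = hypergrad_upper(x_new, y_new, xi) + (1 - eta) * (hf - hypergrad_upper(x, y, xi))
        hg = grad_lower(x_new, y_new, zeta) + (1 - beta) * (hg - grad_lower(x, y, zeta))
        x, y = x_new, y_new
    return x, y  # both approach 0, the bilevel optimum of the toy problem

print(sustain_like())
```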
Original language | English (US) |
---|---|
Title of host publication | Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021 |
Editors | Marc'Aurelio Ranzato, Alina Beygelzimer, Yann Dauphin, Percy S. Liang, Jenn Wortman Vaughan |
Publisher | Neural information processing systems foundation |
Pages | 30271-30283 |
Number of pages | 13 |
ISBN (Electronic) | 9781713845393 |
State | Published - 2021 |
Event | 35th Conference on Neural Information Processing Systems, NeurIPS 2021 - Virtual, Online |
Duration | Dec 6 2021 → Dec 14 2021 |
Publication series
Name | Advances in Neural Information Processing Systems |
---|---|
Volume | 34 |
ISSN (Print) | 1049-5258 |
Conference
Conference | 35th Conference on Neural Information Processing Systems, NeurIPS 2021 |
---|---|
City | Virtual, Online |
Period | 12/6/21 → 12/14/21 |
Bibliographical note
Funding Information: We thank the anonymous reviewers for their valuable comments and suggestions. The work of Prashant Khanduri, Siliang Zeng, and Mingyi Hong was supported by the National Science Foundation (NSF) through grant CIF-1910385. The work of Mingyi Hong was also supported by an IBM Faculty Research award. The work of Hoi-To Wai was supported by CUHK Direct Grant #4055113. Zhaoran Wang acknowledges the National Science Foundation (Awards 2048075, 2008827, 2015568, 1934931), the Simons Institute (Theory of Reinforcement Learning), Amazon, J.P. Morgan, and Two Sigma for their support. Zhuoran Yang acknowledges the Simons Institute (Theory of Reinforcement Learning).
Publisher Copyright:
© 2021 Neural information processing systems foundation. All rights reserved.