TY - JOUR
T1 - Understanding a Class of Decentralized and Federated Optimization Algorithms
T2 - A Multirate Feedback Control Perspective
AU - Zhang, Xinwei
AU - Hong, Mingyi
AU - Elia, Nicola
N1 - Publisher Copyright:
© 2023 Society for Industrial and Applied Mathematics Publications. All rights reserved.
PY - 2023
Y1 - 2023
N2 - Distributed algorithms have been playing an increasingly important role in many applications such as machine learning, signal processing, and control. Significant research efforts have been devoted to developing and analyzing new algorithms for various applications. In this work, we provide a fresh perspective to understand, analyze, and design distributed optimization algorithms. Through the lens of multirate feedback control, we show that a wide class of distributed algorithms, including popular decentralized/federated schemes such as decentralized gradient descent, gradient tracking, and federated averaging, can be viewed as discretizing a certain continuous-time feedback control system, possibly with multiple sampling rates. This key observation not only allows us to develop a generic framework to analyze the convergence of the entire algorithm class but, more importantly, also leads to an interesting way of designing new distributed algorithms. We develop the theory behind our framework and provide examples to highlight how the framework can be used in practice.
AB - Distributed algorithms have been playing an increasingly important role in many applications such as machine learning, signal processing, and control. Significant research efforts have been devoted to developing and analyzing new algorithms for various applications. In this work, we provide a fresh perspective to understand, analyze, and design distributed optimization algorithms. Through the lens of multirate feedback control, we show that a wide class of distributed algorithms, including popular decentralized/federated schemes such as decentralized gradient descent, gradient tracking, and federated averaging, can be viewed as discretizing a certain continuous-time feedback control system, possibly with multiple sampling rates. This key observation not only allows us to develop a generic framework to analyze the convergence of the entire algorithm class but, more importantly, also leads to an interesting way of designing new distributed algorithms. We develop the theory behind our framework and provide examples to highlight how the framework can be used in practice.
KW - control perspective
KW - convergence analysis
KW - distributed algorithms
UR - http://www.scopus.com/inward/record.url?scp=85166327414&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85166327414&partnerID=8YFLogxK
U2 - 10.1137/22M1475648
DO - 10.1137/22M1475648
M3 - Article
AN - SCOPUS:85166327414
SN - 1052-6234
VL - 33
SP - 652
EP - 683
JO - SIAM Journal on Optimization
JF - SIAM Journal on Optimization
IS - 2
ER -