TY - JOUR
T1 - Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization
T2 - 37th International Conference on Machine Learning, ICML 2020
AU - Sun, Haoran
AU - Lu, Songtao
AU - Hong, Mingyi
N1 - Publisher Copyright:
© 2020 by the author(s).
PY - 2020
Y1 - 2020
N2 - Many modern large-scale machine learning problems benefit from decentralized and stochastic optimization. Recent works have shown that utilizing both decentralized computing and local stochastic gradient estimates can outperform state-of-the-art centralized algorithms in applications involving highly non-convex problems, such as training deep neural networks. In this work, we propose a decentralized stochastic algorithm for certain smooth non-convex problems in which the system has m nodes and each node holds a large number of samples (denoted by n). Unlike the majority of existing decentralized learning algorithms for either stochastic or finite-sum problems, we focus on simultaneously reducing the total number of communication rounds among the nodes and the number of local data samples accessed. In particular, we propose an algorithm named D-GET (decentralized gradient estimation and tracking), which jointly performs decentralized gradient estimation (estimating the local gradient using a subset of local samples) and gradient tracking (tracking the global full gradient using local estimates). We show that, to achieve an ϵ-stationary solution of the deterministic finite-sum problem, the proposed algorithm achieves an O(mn^{1/2}ϵ^{-1}) sample complexity and an O(ϵ^{-1}) communication complexity. These bounds significantly improve upon the best existing bounds of O(mnϵ^{-1}) and O(ϵ^{-1}), respectively. Similarly, for online problems, the proposed method achieves an O(mϵ^{-3/2}) sample complexity and an O(ϵ^{-1}) communication complexity.
AB - Many modern large-scale machine learning problems benefit from decentralized and stochastic optimization. Recent works have shown that utilizing both decentralized computing and local stochastic gradient estimates can outperform state-of-the-art centralized algorithms in applications involving highly non-convex problems, such as training deep neural networks. In this work, we propose a decentralized stochastic algorithm for certain smooth non-convex problems in which the system has m nodes and each node holds a large number of samples (denoted by n). Unlike the majority of existing decentralized learning algorithms for either stochastic or finite-sum problems, we focus on simultaneously reducing the total number of communication rounds among the nodes and the number of local data samples accessed. In particular, we propose an algorithm named D-GET (decentralized gradient estimation and tracking), which jointly performs decentralized gradient estimation (estimating the local gradient using a subset of local samples) and gradient tracking (tracking the global full gradient using local estimates). We show that, to achieve an ϵ-stationary solution of the deterministic finite-sum problem, the proposed algorithm achieves an O(mn^{1/2}ϵ^{-1}) sample complexity and an O(ϵ^{-1}) communication complexity. These bounds significantly improve upon the best existing bounds of O(mnϵ^{-1}) and O(ϵ^{-1}), respectively. Similarly, for online problems, the proposed method achieves an O(mϵ^{-3/2}) sample complexity and an O(ϵ^{-1}) communication complexity.
UR - https://www.scopus.com/pages/publications/105022591044
UR - https://www.scopus.com/pages/publications/105022591044#tab=citedBy
M3 - Conference article
AN - SCOPUS:105022591044
SN - 2640-3498
VL - 119
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 13 July 2020 through 18 July 2020
ER -