Improving the sample and communication complexity for decentralized non-convex optimization: Joint gradient estimation and tracking

Haoran Sun, Songtao Lu, Mingyi Hong

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Many modern large-scale machine learning problems benefit from decentralized and stochastic optimization. Recent works have shown that utilizing both decentralized computing and local stochastic gradient estimates can outperform state-of-the-art centralized algorithms in applications involving highly non-convex problems, such as training deep neural networks. In this work, we propose a decentralized stochastic algorithm to deal with certain smooth non-convex problems where there are m nodes in the system, and each node has a large number of samples (denoted as n). Unlike the majority of existing decentralized learning algorithms for either stochastic or finite-sum problems, our focus is on simultaneously reducing the total communication rounds among the nodes and the number of local data samples accessed. In particular, we propose an algorithm named D-GET (decentralized gradient estimation and tracking), which jointly performs decentralized gradient estimation (which estimates the local gradient using a subset of local samples) and gradient tracking (which tracks the global full gradient using local estimates). We show that, to achieve a certain ε-stationary solution of the deterministic finite-sum problem, the proposed algorithm achieves an O(mn^{1/2} ε^{-1}) sample complexity and an O(ε^{-1}) communication complexity. These bounds significantly improve upon the best existing bounds of O(mn ε^{-1}) and O(ε^{-1}), respectively. Similarly, for online problems, the proposed method achieves an O(m ε^{-3/2}) sample complexity and an O(ε^{-1}) communication complexity.
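
To make the two coupled updates described in the abstract concrete, below is a minimal NumPy sketch of the D-GET update structure on a toy decentralized least-squares problem. This is an illustration of the general scheme (a consensus step driven by a tracked gradient, a periodically refreshed recursive minibatch gradient estimator, and a gradient-tracking recursion), not the authors' implementation; the problem, the ring mixing matrix W, and the hyperparameters alpha (step size), q (full-gradient refresh period), and S (minibatch size) are all illustrative assumptions.

import numpy as np

# Toy setup (illustrative, not from the paper): m nodes, each holding n
# local least-squares samples in dimension d.
m, n, d = 4, 100, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n, d))   # local data matrices
b = rng.standard_normal((m, n))      # local targets

def local_grad(i, x, idx):
    """Minibatch gradient of node i's least-squares loss at x."""
    Ai, bi = A[i, idx], b[i, idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = W[i, (i + 1) % m] = 0.25

alpha, q, S, T = 0.01, 10, 16, 200
x = np.zeros((m, d))                 # local iterates, one row per node
v = np.array([local_grad(i, x[i], np.arange(n)) for i in range(m)])
y = v.copy()                         # tracking variables (track avg of v)

for t in range(T):
    x_new = W @ x - alpha * y        # consensus + tracked-gradient step
    if (t + 1) % q == 0:
        # Periodic full local gradient refresh (the "estimation" reset).
        v_new = np.array([local_grad(i, x_new[i], np.arange(n))
                          for i in range(m)])
    else:
        # Recursive variance-reduced minibatch estimate; a single shared
        # index set is used here for brevity, whereas each node would
        # sample its own minibatch independently.
        idx = rng.choice(n, S, replace=False)
        v_new = np.array([v[i] + local_grad(i, x_new[i], idx)
                          - local_grad(i, x[i], idx) for i in range(m)])
    y = W @ y + v_new - v            # gradient tracking recursion
    x, v = x_new, v_new

The interplay sketched here, where each node only touches S samples per round except at the periodic refresh while y keeps every node's direction aligned with the global gradient, is what drives the improved sample and communication bounds stated above.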

Original language: English (US)
Title of host publication: 37th International Conference on Machine Learning, ICML 2020
Editors: Hal Daumé, Aarti Singh
Publisher: International Machine Learning Society (IMLS)
Pages: 9154-9165
Number of pages: 12
ISBN (Electronic): 9781713821120
State: Published - 2020
Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: Jul 13 2020 - Jul 18 2020

Publication series

Name: 37th International Conference on Machine Learning, ICML 2020
Volume: PartF168147-12

Conference

Conference: 37th International Conference on Machine Learning, ICML 2020
City: Virtual, Online
Period: 7/13/20 - 7/18/20

Bibliographical note

Funding Information:
The authors were supported by NSF under grant CIF-1910385, and in part by AFOSR grant 19RT0424 and ARO grant W911NF-19-1-0247.

Publisher Copyright:
© 2020 37th International Conference on Machine Learning, ICML 2020. All rights reserved.
