Hybrid-order distributed SGD: Balancing communication overhead, computational complexity, and convergence rate for distributed learning

Naeimeh Omidvar, Seyed Mohammad Hosseini, Mohammad Ali Maddah-Ali

Research output: Contribution to journal › Article › peer-review

Abstract

Communication overhead, computation load, and convergence speed are three major challenges to the scalability of distributed stochastic optimization algorithms for training large neural networks. In this paper, we propose hybrid-order distributed stochastic gradient descent (HO-SGD), which strikes a better balance among these three factors than previous methods for a general class of non-convex stochastic optimization problems. In particular, we show that by properly interleaving zeroth-order and first-order gradient updates, it is possible to significantly reduce communication and computation overheads while guaranteeing fast convergence. The proposed method achieves the same order of convergence rate as the fastest distributed methods (i.e., fully synchronous SGD) with significantly lower computational complexity and communication overhead per iteration, and the same order of communication overhead as state-of-the-art communication-efficient methods with order-wise lower computational complexity. Moreover, it improves the convergence rate of zeroth-order SGD methods by an order. Finally, empirical studies demonstrate that the proposed hybrid-order approach attains significantly higher test accuracy and better generalization than all baselines, owing to its novel exploration mechanism.
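The core idea described above is to alternate inexpensive zeroth-order (function-value-based) update rounds with occasional first-order (gradient-based) rounds. The following is a minimal, hypothetical Python sketch of such an interleaving, assuming a toy quadratic objective, an assumed schedule of one first-order round every K iterations, and a standard two-point zeroth-order estimator; it illustrates the interleaving idea only and is not the paper's HO-SGD algorithm or its hyper-parameters.

```python
# Illustrative sketch of interleaving zeroth-order (ZO) and first-order (FO)
# rounds in a distributed SGD loop. All names and constants (K_FO_PERIOD,
# NUM_WORKERS, the quadratic toy loss, step sizes) are hypothetical choices
# for illustration, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

DIM = 50               # parameter dimension
NUM_WORKERS = 8        # number of distributed workers
K_FO_PERIOD = 10       # one first-order round every K iterations (assumed schedule)
MU = 1e-3              # smoothing radius for the two-point ZO estimator
STEP = 0.05            # step size (assumed constant for simplicity)
ITERS = 200

# Toy objective per worker: a noisy quadratic (illustration only).
A = [rng.standard_normal((DIM, DIM)) * 0.1 + np.eye(DIM) for _ in range(NUM_WORKERS)]

def loss(w, x, noise_scale=0.01):
    """Stochastic loss at worker w (noise stands in for mini-batch sampling)."""
    noisy = x + noise_scale * rng.standard_normal(DIM)
    return 0.5 * noisy @ (A[w] @ noisy)

def first_order_grad(w, x, noise_scale=0.01):
    """Stochastic first-order gradient at worker w (accurate but costly)."""
    noisy = x + noise_scale * rng.standard_normal(DIM)
    return 0.5 * (A[w] + A[w].T) @ noisy

def zeroth_order_grad(w, x):
    """Two-point zeroth-order estimate along a random direction.
    Only two function evaluations are needed per worker, so the per-iteration
    computation (and the payload a worker must form) is much lighter than
    computing a full gradient."""
    u = rng.standard_normal(DIM)
    g_scalar = (loss(w, x + MU * u) - loss(w, x - MU * u)) / (2 * MU)
    return g_scalar * u

x = rng.standard_normal(DIM)
for t in range(ITERS):
    if t % K_FO_PERIOD == 0:
        # Occasional first-order round: accurate but expensive.
        grads = [first_order_grad(w, x) for w in range(NUM_WORKERS)]
    else:
        # Frequent zeroth-order rounds: cheap estimates between FO rounds.
        grads = [zeroth_order_grad(w, x) for w in range(NUM_WORKERS)]
    # Server averages the workers' (estimated) gradients and takes an SGD step.
    x -= STEP * np.mean(grads, axis=0)

print("final noiseless objective (worker average):",
      np.mean([loss(w, x, noise_scale=0.0) for w in range(NUM_WORKERS)]))
```

In this sketch the schedule parameter K_FO_PERIOD controls the trade-off: larger values mean more cheap zeroth-order rounds per expensive first-order round, trading gradient accuracy for lower per-iteration computation and communication.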

Original language: English (US)
Article number: 128020
Journal: Neurocomputing
Volume: 599
DOIs
State: Published - Sep 28 2024
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2024 Elsevier B.V.

Keywords

  • Communication overhead
  • Computational complexity
  • Convergence rate
  • Distributed learning
  • Distributed optimization
  • Generalization
  • Non-convex
  • Stochastic optimization
