Abstract
This paper presents a new class of gradient methods for distributed machine learning that adaptively skip gradient calculations to learn with reduced communication and computation. Simple rules are designed to detect slowly varying gradients and thus trigger the reuse of outdated gradients. The resulting gradient-based algorithms are termed Lazily Aggregated Gradient, justifying the acronym LAG used henceforth. Theoretically, the merits of this contribution are: (i) the convergence rate matches that of batch gradient descent in the strongly convex, convex, and nonconvex cases; and (ii) if the distributed datasets are heterogeneous (quantified by certain measurable constants), the number of communication rounds needed to achieve a target accuracy is reduced thanks to the adaptive reuse of lagged gradients. Numerical experiments on both synthetic and real data corroborate a significant communication reduction compared with alternatives.
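The abstract describes LAG only at a high level; the following Python sketch illustrates the lazy-aggregation idea on a toy distributed least-squares problem. The trigger rule shown is a simplified one-step variant of the kind of condition the abstract alludes to, and the quadratic losses, threshold `xi`, step size, and worker scales are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Toy setup (assumed for illustration): M workers, each holding a local quadratic
# loss f_m(theta) = 0.5 * ||A_m theta - b_m||^2; the global gradient is the sum
# of the local gradients. Scales are deliberately heterogeneous so that some
# workers have slowly varying gradients and can skip uploads.
rng = np.random.default_rng(0)
M, dim, n_iters = 5, 10, 200
scales = [0.05, 0.1, 0.2, 1.0, 1.0]
A = [s * rng.standard_normal((20, dim)) for s in scales]
b = [rng.standard_normal(20) for _ in range(M)]

def local_grad(m, theta):
    return A[m].T @ (A[m] @ theta - b[m])

# Step size from the smoothness constant of the summed loss; xi is an
# illustrative trigger threshold (an assumption, not a tuned value).
L = sum(np.linalg.norm(A[m], 2) ** 2 for m in range(M))
alpha = 1.0 / L
xi = 0.5

theta = np.zeros(dim)
prev_theta = theta.copy()
last_sent = [local_grad(m, theta) for m in range(M)]  # last gradient each worker uploaded
agg = sum(last_sent)        # server-side aggregate of (possibly stale) worker gradients
uploads = 0

for k in range(n_iters):
    iterate_change = np.sum((theta - prev_theta) ** 2)
    prev_theta = theta.copy()
    for m in range(M):
        g = local_grad(m, theta)
        # Lazy-aggregation trigger (simplified, one-step lookback): upload only if
        # the new gradient differs from the last uploaded one by more than a
        # fraction of the recent iterate change; otherwise the server keeps
        # reusing the stale copy, saving one worker-to-server communication.
        if np.sum((g - last_sent[m]) ** 2) > xi * iterate_change / (alpha ** 2 * M ** 2):
            agg += g - last_sent[m]   # refresh this worker's contribution
            last_sent[m] = g
            uploads += 1
    theta = theta - alpha * agg       # descent step with the lazily aggregated gradient

loss = sum(0.5 * np.sum((A[m] @ theta - b[m]) ** 2) for m in range(M))
print(f"final loss: {loss:.4f}, uploads: {uploads} / {n_iters * M}")
```

Because the skip condition ties the allowed gradient staleness to how much the iterate has recently moved, workers whose local data induce small gradient changes (here, the small-scale workers) rarely upload, which is the communication saving the abstract attributes to heterogeneous datasets.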
| Original language | English (US) |
|---|---|
| Pages (from-to) | 5050-5060 |
| Number of pages | 11 |
| Journal | Advances in Neural Information Processing Systems |
| Volume | 2018-December |
| State | Published - 2018 |
| Event | 32nd Conference on Neural Information Processing Systems, NeurIPS 2018, Montreal, Canada, December 2–8, 2018 |
Bibliographical note
Publisher Copyright: © 2018 Curran Associates Inc. All rights reserved.