Distributed learning has become a critical enabler of the massively connected world that many people envision. This article discusses four key elements of scalable distributed processing and real-time intelligence: problems, data, communication, and computation. Our aim is to provide a unique perspective on how these elements should work together in an effective and coherent manner. In particular, we selectively review recent techniques developed for optimizing nonconvex models (i.e., problem classes) that process batch and streaming data (data types) across networks in a distributed manner (communication and computation paradigm). We describe the intuitions and connections behind a core set of popular distributed algorithms, emphasizing how to balance computation and communication costs. We also discuss practical issues and future research directions.
Bibliographical note
Funding Information:
We would like to thank the anonymous reviewers as well as Dr. Gesualdo Scutari and Dr. Angelia Nedić for helpful comments that significantly improved the quality of the article. Tsung-Hui Chang was supported, in part, by the National Key R&D Program of China (grant 2018YFB1800800), the National Natural Science Foundation of China (grant 61731018), and the Shenzhen Fundamental Research Fund (grants JCYJ20190813171003723 and KQTD2015033114415450). Hoi-To Wai was supported by the Chinese University of Hong Kong (direct grant 4055113). Mingyi Hong, Songtao Lu, and Xinwei Zhang were supported, in part, by the National Science Foundation (grants CMMI-172775 and CIF-1910385) and the Army Research Office (grant 73202-CS). This work was done when Songtao Lu was a postdoctoral fellow at the University of Minnesota.