This paper proposes and analyzes a communication-efficient distributed optimization framework for general nonconvex nonsmooth signal processing and machine learning problems under an asynchronous protocol. At each iteration, worker machines compute gradients of a known empirical loss function using their own local data, and a master machine solves a related minimization problem to update the current estimate. We prove that for nonconvex nonsmooth problems, the proposed algorithm converges to a stationary point at a sublinear rate in the number of communication rounds, matching the best theoretical rate achievable for this class of problems. Linear convergence to a global minimum is established, without any statistical assumptions on the local data, for problems with composite loss functions whose smooth part is strongly convex. Extensive numerical experiments verify that the proposed approach indeed improves, sometimes significantly, over other state-of-the-art algorithms in terms of total communication efficiency.
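The master-worker update described above can be illustrated with a minimal sketch. The code below is a hypothetical synchronous simplification, not the paper's algorithm: workers send gradients of a local least-squares loss (chosen purely for illustration), and the master's "related minimization problem" is taken to be a proximal step on an assumed l1 regularizer.

```python
import numpy as np

def worker_gradient(A_i, b_i, x):
    """Gradient of an illustrative local smooth loss f_i(x) = 0.5*||A_i x - b_i||^2."""
    return A_i.T @ (A_i @ x - b_i)

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 (the assumed nonsmooth part)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def master_update(x, grads, step=0.05, lam=0.01):
    """Master aggregates worker gradients, then solves the prox subproblem
    min_z <g, z> + lam*||z||_1 + (1/(2*step))*||z - x||^2 in closed form."""
    g = np.mean(grads, axis=0)
    return soft_threshold(x - step * g, step * lam)

# Simulate four workers, each holding its own local data (A_i, b_i).
rng = np.random.default_rng(0)
workers = [(rng.standard_normal((20, 5)), rng.standard_normal(20)) for _ in range(4)]

x = np.zeros(5)
for _ in range(300):                       # communication rounds
    grads = [worker_gradient(A, b, x) for A, b in workers]
    x = master_update(x, grads)            # master's estimate update
```

In this toy composite (strongly convex smooth part plus l1), the iterate settles to the regularized global minimizer, consistent with the linear-convergence regime the abstract describes.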
Bibliographical note

Funding Information:
Manuscript received October 26, 2019; revised March 29, 2020 and April 28, 2020; accepted May 14, 2020. Date of publication May 21, 2020; date of current version June 10, 2020. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Antti Tölli. This work was supported in part by the DARPA Young Faculty Award under Grant N66001-14-1-4047 and in part by the ADC Fellowship from the University of Minnesota Digital Technology Center. This article was presented in part at the Global Conference on Signal and Information Processing, 2018. (Corresponding author: Jineng Ren.) The authors are with the Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455 USA (e-mail: firstname.lastname@example.org; email@example.com).
© 2020 IEEE.
- distributed algorithm
- strongly convex