TY - JOUR
T1 - Asynchronous Distributed ADMM for Large-Scale Optimization - Part II
T2 - Linear Convergence Analysis and Numerical Performance
AU - Chang, Tsung-Hui
AU - Liao, Wei-Cheng
AU - Hong, Mingyi
AU - Wang, Xiangfeng
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2016/6/15
Y1 - 2016/6/15
AB - The alternating direction method of multipliers (ADMM) has been recognized as a versatile approach for efficiently solving modern large-scale machine learning and signal processing problems. When the data size and/or the problem dimension is large, a distributed version of ADMM can be used, which distributes both the computation load and the data set across a network of computing nodes. Unfortunately, a direct synchronous implementation of such an algorithm does not scale well with the problem size, as the algorithm speed is limited by the slowest computing nodes. To address this issue, in a companion paper we proposed an asynchronous distributed ADMM (AD-ADMM) and studied its worst-case convergence conditions. In this paper, we extend that study by characterizing the conditions under which the AD-ADMM achieves linear convergence. Our conditions, as well as the resulting linear rates, reveal the impact that the algorithm parameters, network delay, and network size have on the algorithm's performance. To demonstrate the superior time efficiency of the proposed AD-ADMM, we test it on a high-performance computer cluster by solving a large-scale logistic regression problem.
KW - ADMM
KW - distributed optimization
KW - asynchronous
KW - consensus optimization
UR - http://www.scopus.com/inward/record.url?scp=84964671262&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84964671262&partnerID=8YFLogxK
U2 - 10.1109/TSP.2016.2537261
DO - 10.1109/TSP.2016.2537261
M3 - Article
AN - SCOPUS:84964671262
SN - 1053-587X
VL - 64
SP - 3131
EP - 3144
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
IS - 12
M1 - 7423796
ER -