TY - JOUR
T1 - Hybrid ADMM: a unifying and fast approach to decentralized optimization
AU - Ma, Meng
AU - Nikolakopoulos, Athanasios N.
AU - Giannakis, Georgios B.
N1 - Publisher Copyright:
© 2018, The Author(s).
PY - 2018/12/1
Y1 - 2018/12/1
N2 - The present work introduces the hybrid consensus alternating direction method of multipliers (H-CADMM), a novel framework for optimization over networks that unifies existing distributed optimization approaches, including the centralized (C-CADMM) and the decentralized (D-CADMM) consensus ADMM. H-CADMM provides a flexible tool that leverages the underlying graph topology to strike a desirable balance between node-to-node communication overhead and convergence rate, thereby alleviating known limitations of both C-CADMM and D-CADMM. A rigorous analysis of the novel method establishes a linear convergence rate and also guides the choice of parameters that optimize this rate. The hybrid update rules of H-CADMM lend themselves to “in-network acceleration,” which is shown to deliver a considerable and essentially “free-of-charge” performance boost over the fully decentralized ADMM. Comprehensive numerical tests validate the analysis and showcase the potential of the method in efficiently tackling widely useful learning tasks.
KW - ADMM
KW - Consensus
KW - Decentralized learning
KW - Distributed optimization
KW - Hybrid
UR - http://www.scopus.com/inward/record.url?scp=85058841590&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85058841590&partnerID=8YFLogxK
U2 - 10.1186/s13634-018-0589-x
DO - 10.1186/s13634-018-0589-x
M3 - Article
AN - SCOPUS:85058841590
SN - 1687-6172
VL - 2018
JO - EURASIP Journal on Advances in Signal Processing
JF - EURASIP Journal on Advances in Signal Processing
IS - 1
M1 - 73
ER -