We consider solving a convex, nonsmooth, and stochastic optimization problem over a multi-agent network. Each agent has access to a local objective function and can communicate only with its immediate neighbors. We develop a dynamic stochastic proximal-gradient consensus (DySPGC) algorithm, which: i) works for both static and randomly time-varying networks; ii) can use either exact or stochastic gradient information; iii) has a provable rate of convergence. Interestingly, the developed algorithm includes as special cases many existing (and seemingly unrelated) first-order algorithms for distributed optimization over static networks, such as EXTRA (Shi et al., 2014), PG-EXTRA (Shi et al., 2015), IC/IDC-ADMM (Chang et al., 2014), and DLM (Ling et al., 2015). It is also closely related to the classical distributed gradient method.
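To illustrate the distributed setting the abstract describes, here is a minimal sketch of the classical distributed stochastic gradient method that DySPGC is closely related to (not the DySPGC algorithm itself). The ring topology, the quadratic local objectives f_i(x) = 0.5·(x − a_i)², and the step-size schedule are all illustrative assumptions: each agent mixes its iterate with its neighbors' via a doubly stochastic matrix W, then takes a noisy local gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ring network of 4 agents; W is a doubly stochastic mixing matrix
# (Metropolis-style weights: 1/3 to each neighbor, 1/3 to self).
n = 4
W = np.array([[1/3, 1/3, 0.0, 1/3],
              [1/3, 1/3, 1/3, 0.0],
              [0.0, 1/3, 1/3, 1/3],
              [1/3, 0.0, 1/3, 1/3]])

# Hypothetical local objectives f_i(x) = 0.5 * (x - a_i)^2; the
# network-wide minimizer of sum_i f_i is the average of the a_i.
a = np.array([1.0, 2.0, 3.0, 4.0])
x = np.zeros(n)  # each agent's local copy of the decision variable

for t in range(2000):
    step = 1.0 / np.sqrt(t + 1)            # diminishing step size
    noise = 0.1 * rng.standard_normal(n)   # stochastic gradient noise
    grad = (x - a) + noise                 # noisy local gradients
    x = W @ x - step * grad                # mix with neighbors, then descend

print(x)  # all entries close to the consensus optimum, 2.5
```

With exact gradients and a fixed step size this is the classical distributed gradient method over a static network; the stochastic noise term models the inexact-gradient regime the abstract refers to.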
Original language: English (US)
Title of host publication: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 5
State: Published - May 18 2016
Event: 41st IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016 - Shanghai, China
Duration: Mar 20 2016 → Mar 25 2016
Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Bibliographical note (Funding Information): M. Hong is supported by NSF Grant No. CCF-1526078. T.-H. Chang is supported by NSFC, China, Grant No. 61571385.
© 2016 IEEE.
- Consensus optimization
- Alternating direction method of multipliers
- Stochastic optimization