Abstract
We consider solving a convex optimization problem, with possibly stochastic gradients, over a randomly time-varying multiagent network. Each agent has access to a local objective function, and it only has unbiased estimates of the gradients of the smooth component. We develop a dynamic stochastic proximal-gradient consensus algorithm with the following key features: 1) it works for both static and certain randomly time-varying networks; 2) it allows the agents to utilize either exact or stochastic gradient information; 3) it is convergent with a provable rate. In particular, the proposed algorithm converges to a global optimal solution at a rate of \mathcal{O}(1/r) [resp. \mathcal{O}(1/\sqrt{r})] when the exact (resp. stochastic) gradient is available, where r is the iteration counter. Interestingly, the developed algorithm establishes a close connection among a number of (seemingly unrelated) distributed algorithms, such as EXTRA, PG-EXTRA, IC/IDC-ADMM, DLM, and the classical distributed subgradient method.
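To illustrate the problem class the abstract describes, the sketch below runs a generic decentralized proximal stochastic-gradient iteration of the form x_i^{r+1} = prox_{αg}(Σ_j W_ij x_j^r − α ĝ_i), where W is a doubly stochastic mixing matrix and ĝ_i is an unbiased estimate of agent i's smooth-component gradient. This is a hedged, minimal sketch of the setting only, not the paper's DySPGC update; the problem instance (local least squares plus an ℓ1 term), the ring network, and all step-size values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (NOT the paper's algorithm): agents cooperatively
# minimize sum_i f_i(x) + g(x), where f_i(x) = 0.5*||A_i x - b_i||^2 is
# smooth (only a noisy, unbiased gradient estimate is available) and
# g(x) = lam*||x||_1 is nonsmooth, handled through its proximal operator.
rng = np.random.default_rng(0)
n_agents, dim = 4, 3
A = [rng.standard_normal((5, dim)) for _ in range(n_agents)]
b = [Ai @ np.ones(dim) for Ai in A]   # common minimizer near the all-ones vector
lam, alpha, noise = 0.01, 0.02, 0.1   # hypothetical regularizer, step, noise level

# Doubly stochastic mixing matrix for a 4-agent ring network
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros((n_agents, dim))         # one local copy of the decision variable per agent
for r in range(500):
    mixed = W @ x                     # consensus (neighbor averaging) step
    for i in range(n_agents):
        grad = A[i].T @ (A[i] @ x[i] - b[i])            # exact local gradient
        ghat = grad + noise * rng.standard_normal(dim)  # unbiased stochastic estimate
        x[i] = soft_threshold(mixed[i] - alpha * ghat, alpha * lam)

# Disagreement across agents shrinks as the local copies reach consensus
disagreement = np.max(np.abs(x - x.mean(axis=0)))
print(disagreement)
```

Setting `noise = 0.0` recovers the exact-gradient regime the abstract contrasts with the stochastic one; in that case the iterates settle instead of hovering in a noise-dominated neighborhood of the solution.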
| Original language | English (US) |
|---|---|
| Article number | 7862886 |
| Pages (from-to) | 2933-2948 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Signal Processing |
| Volume | 65 |
| Issue number | 11 |
| DOIs | |
| State | Published - Jun 1 2017 |
Bibliographical note
Publisher Copyright: © 2017 IEEE.
Keywords
- ADMM
- distributed optimization
- fast algorithms
- rate analysis