Abstract
Federated Learning (FL) is a popular paradigm for communication-efficient learning from distributed data. To utilize data at different clients without moving it to the cloud, algorithms such as Federated Averaging (FedAvg) adopt a computation-then-aggregation model, in which multiple local updates are performed on local data before aggregation. These algorithms can fail when faced with practical challenges, e.g., when the local data are not independently and identically distributed (non-i.i.d.). In this paper, we first characterize the behavior of the FedAvg algorithm and show that, without strong and unrealistic assumptions on the problem structure, it can behave erratically. Aiming to design FL algorithms that are provably fast and require as few assumptions as possible, we propose a new algorithm design strategy from the primal-dual optimization perspective. Our strategy yields algorithms that handle non-convex objective functions, achieve the best possible optimization and communication complexity (in a well-defined sense), and accommodate both full-batch and mini-batch local computation models. Importantly, the proposed algorithms are communication efficient, in that the communication effort decreases as the level of heterogeneity among the local data decreases. In the extreme case where the local data become homogeneous, only O(1) communication is required among the agents. To the best of our knowledge, this is the first algorithmic framework for FL that achieves all the above properties.
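The computation-then-aggregation pattern described above can be illustrated with a minimal sketch: each client runs several local gradient steps on its own data, and the server averages the resulting models. The toy least-squares data, learning rate, and step counts below are illustrative assumptions for exposition only; this sketches plain FedAvg, not the primal-dual algorithms proposed in the paper.

```python
# Minimal sketch of FedAvg's computation-then-aggregation model (illustrative only).
import numpy as np

def local_update(x_global, data, lr=0.1, local_steps=5):
    """Each client performs several SGD steps on its own local data."""
    x = x_global.copy()
    for _ in range(local_steps):
        a, b = data[np.random.randint(len(data))]   # one local sample (a, b)
        grad = 2 * a * (a @ x - b)                  # gradient of (a^T x - b)^2
        x -= lr * grad
    return x

def fedavg_round(x_global, client_data):
    """Server aggregates the locally updated models by simple averaging."""
    local_models = [local_update(x_global, d) for d in client_data]
    return np.mean(local_models, axis=0)

# Toy usage: 4 clients, each holding its own (possibly non-i.i.d.) least-squares data.
rng = np.random.default_rng(0)
dim, clients = 3, 4
client_data = [
    [(rng.normal(size=dim), rng.normal()) for _ in range(20)] for _ in range(clients)
]
x = np.zeros(dim)
for _ in range(10):  # communication rounds
    x = fedavg_round(x, client_data)
```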
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 6055-6070 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Signal Processing |
| Volume | 69 |
| DOIs | |
| State | Published - Jan 1 2021 |
Bibliographical note
Publisher Copyright: © 2021 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
Keywords
- Convergence analysis
- Data heterogeneity
- Distributed algorithms
- Federated learning
- Machine learning algorithms