Privacy-preserving federated learning: algorithms and guarantees

Xinwei Zhang, Xiangyi Chen, Bingqing Song, Prashant Khanduri, Mingyi Hong

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

In federated learning (FL), multiple clients coordinate to learn a global model using locally held private data. The clients' private data may contain sensitive user information that needs to be protected during FL training. Differential privacy (DP) is a useful mechanism that provides quantifiable privacy guarantees while training a machine learning (ML) model. In this chapter, we discuss two popular and useful notions of DP that often arise in the context of FL: sample-level DP and client-level DP, where the former provides privacy protection for each data sample, while the latter protects each client's identity. We further discuss differentially private FL algorithms and the associated performance guarantees under different settings. Two common operations in developing differentially private algorithms for FL are: 1) adding perturbation and 2) clipping the gradients/models before transmission. A majority of research has focused on analyzing the effect of different perturbation distributions on the performance of DP algorithms. However, the effect of clipping on the performance of FL algorithms has not been well understood. In the second part of this chapter, we discuss, both empirically and theoretically, the effect of clipping on the performance of an FL algorithm. Specifically, we demonstrate the effect of clipping bias on the utility/privacy trade-off of FL algorithms. Finally, we present some open problems and various avenues for future research.
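The two operations mentioned in the abstract — clipping updates to bound each contribution and adding calibrated perturbation — can be sketched as a single aggregation round. This is a minimal illustrative sketch, not the chapter's algorithm: the function names, the Gaussian noise choice, and the client-level-DP framing (one clipped update per client) are assumptions made here for concreteness.

```python
import numpy as np

def clip_and_perturb(update, clip_norm, noise_std, rng):
    """Clip an update to L2 norm at most `clip_norm`, then add Gaussian noise.

    Illustrative clip-then-perturb step; parameter names are not the
    chapter's notation. With noise_std = 0 this isolates the clipping
    bias the chapter analyzes.
    """
    norm = np.linalg.norm(update)
    # Scale down only when the norm exceeds the clipping threshold.
    clipped = update / max(1.0, norm / clip_norm)
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def private_round(client_updates, clip_norm, noise_std, seed=0):
    """One hypothetical client-level-DP round: clip and perturb each
    client's update, then average at the server."""
    rng = np.random.default_rng(seed)
    noisy = [clip_and_perturb(u, clip_norm, noise_std, rng)
             for u in client_updates]
    return np.mean(noisy, axis=0)
```

Note that averaging clipped updates is not, in general, a clipped version of the average — clients with large updates are shrunk more than others, which is one intuition for the clipping bias discussed in the chapter.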

Original language: English (US)
Title of host publication: Federated Learning
Subtitle of host publication: Theory and Practice
Publisher: Elsevier
Pages: 57-74
Number of pages: 18
ISBN (Electronic): 9780443190377
ISBN (Print): 9780443190384
DOIs
State: Published - Jan 1 2024

Bibliographical note

Publisher Copyright:
© 2024 Elsevier Inc. All rights reserved.

Keywords

  • Differential privacy (DP)
  • Federated learning (FL)
  • Model clipping
  • Sample and client level differential privacy
  • Utility-privacy trade-off
