FedDefender: Backdoor Attack Defense in Federated Learning

Waris Gill, Ali Anwar, Muhammad Ali Gulzar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citation

Abstract

Federated Learning (FL) is a privacy-preserving distributed machine learning technique that enables individual clients (e.g., user participants, edge devices, or organizations) to train a model on their local data in a secure environment and then share the trained model with an aggregator to collaboratively build a global model. In this work, we propose FedDefender, a defense mechanism against targeted poisoning attacks in FL that leverages differential testing. FedDefender first applies differential testing to clients' models using a synthetic input. Instead of comparing the outputs (predicted labels), which are unavailable for a synthetic input, FedDefender fingerprints the neuron activations of clients' models to identify a potentially malicious client containing a backdoor. We evaluate FedDefender on the MNIST and FashionMNIST datasets with 20 and 30 clients, and our results demonstrate that FedDefender effectively mitigates such attacks, reducing the attack success rate (ASR) to 10% without degrading global model performance.
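
The minimal Python/PyTorch sketch below illustrates the activation-fingerprinting idea described in the abstract: every client's model is run on the same synthetic input, its hidden-layer activations are recorded as a fingerprint, and the client whose fingerprint deviates most from the consensus (here, the per-neuron median) is flagged. The names (SimpleNet, fingerprint, score_clients) and the median-distance scoring are illustrative assumptions for exposition, not the authors' implementation.

# Illustrative sketch of activation-fingerprint comparison on a synthetic input.
# Names and scoring rule are assumptions; this is not the paper's code.
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    """Toy client model; in FL, all clients share one architecture."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
        self.head = nn.Linear(128, 10)

    def forward(self, x):
        return self.head(self.body(x))

def fingerprint(model, synthetic_input):
    """Record hidden-layer activations for a synthetic input (no label needed)."""
    with torch.no_grad():
        return model.body(synthetic_input).flatten()

def score_clients(client_models, synthetic_input):
    """Score each client by how far its fingerprint lies from the consensus."""
    prints = torch.stack([fingerprint(m, synthetic_input) for m in client_models])
    reference = prints.median(dim=0).values             # consensus fingerprint
    deviations = torch.norm(prints - reference, dim=1)  # per-client distance
    return deviations, int(deviations.argmax())

# Usage: random models and a random input stand in for real client updates
# and the paper's synthetic input.
clients = [SimpleNet() for _ in range(5)]
x = torch.randn(1, 1, 28, 28)
scores, suspect = score_clients(clients, x)
print(scores, "most suspicious client:", suspect)

In a real FL round, the aggregator would presumably down-weight or exclude the flagged client's update before building the global model; the abstract does not specify the exact response, so that step is left out of the sketch.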

Original language: English (US)
Title of host publication: SE4SafeML 2023 - Proceedings of the 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, Co-located with: ESEC/FSE 2023
Editors: Marsha Chechik, Sebastian Elbaum, Boyue Caroline Hu, Lina Marsso, Meriel von Stein
Publisher: Association for Computing Machinery, Inc.
Pages: 6-9
Number of pages: 4
ISBN (Electronic): 9798400703799
DOIs
State: Published - Dec 4, 2023
Event: 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, SE4SafeML 2023, co-located with ESEC/FSE 2023 - San Francisco, United States
Duration: Dec 4, 2023 → …

Publication series

Name: SE4SafeML 2023 - Proceedings of the 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, Co-located with: ESEC/FSE 2023

Conference

Conference: 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, SE4SafeML 2023, co-located with ESEC/FSE 2023
Country/Territory: United States
City: San Francisco
Period: 12/4/23 → …

Bibliographical note

Publisher Copyright:
© 2023 Owner/Author.

Keywords

  • backdoor attack
  • deep learning
  • differential testing
  • fault localization
  • federated learning
  • poisoning attack
  • testing
