Abstract
Federated Learning (FL) is a privacy-preserving distributed machine learning technique that enables individual clients (e.g., user participants, edge devices, or organizations) to train a model on their local data in a secure environment and then share the trained model with an aggregator to collaboratively build a global model. In this work, we propose FedDefender, a defense against targeted poisoning attacks in FL that leverages differential testing. FedDefender first applies differential testing to clients' models using a synthetic input. Since the expected output (predicted label) is unavailable for a synthetic input, FedDefender instead fingerprints the neuron activations of clients' models to identify a potentially malicious client containing a backdoor. We evaluate FedDefender on the MNIST and FashionMNIST datasets with 20 and 30 clients, and our results demonstrate that FedDefender effectively mitigates such attacks, reducing the attack success rate (ASR) to 10% without degrading the global model's performance.
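The core idea — probing all client models with the same synthetic input and flagging the one whose neuron activations deviate from the rest — can be illustrated with a minimal sketch. This is not the paper's implementation: the one-layer "models," the synthetic probe, and the median-distance outlier rule are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def activations(weights, x):
    # Forward pass through one hidden layer; the ReLU activation
    # vector serves as the model's "fingerprint" for the probe input.
    return np.maximum(weights @ x, 0.0)

# Hypothetical setup: five benign clients with similar weights, plus
# one poisoned client whose weights are noticeably shifted.
x_syn = rng.normal(size=16)  # synthetic probe input (no true label exists)
base = rng.normal(size=(8, 16))
clients = [base + 0.01 * rng.normal(size=base.shape) for _ in range(5)]
clients.append(base + 1.0 * rng.normal(size=base.shape))  # malicious client

# Fingerprint each client's neuron activations on the synthetic input.
fps = np.stack([activations(w, x_syn) for w in clients])

# Differential step: since no predicted label can be checked, compare
# fingerprints to each other and flag the farthest-from-median client.
median_fp = np.median(fps, axis=0)
dists = np.linalg.norm(fps - median_fp, axis=1)
suspect = int(np.argmax(dists))
print(suspect)
```

In this toy run the shifted client (index 5) produces the most deviant fingerprint; the full method fingerprints activations of real client models rather than a single synthetic layer.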
| Original language | English (US) |
|---|---|
| Title of host publication | SE4SafeML 2023 - Proceedings of the 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, Co-located with |
| Subtitle of host publication | ESEC/FSE 2023 |
| Editors | Marsha Chechik, Sebastian Elbaum, Boyue Caroline Hu, Lina Marsso, Meriel von Stein |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 6-9 |
| Number of pages | 4 |
| ISBN (Electronic) | 9798400703799 |
| DOIs | |
| State | Published - Dec 4 2023 |
| Event | 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, SE4SafeML 2023. Co-located with: ESEC/FSE 2023 - San Francisco, United States Duration: Dec 4 2023 → … |
Publication series
| Name | SE4SafeML 2023 - Proceedings of the 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, Co-located with: ESEC/FSE 2023 |
|---|
Conference
| Conference | 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, SE4SafeML 2023. Co-located with: ESEC/FSE 2023 |
|---|---|
| Country/Territory | United States |
| City | San Francisco |
| Period | 12/4/23 → … |
Bibliographical note
Publisher Copyright: © 2023 Owner/Author.
Keywords
- backdoor attack
- deep learning
- differential testing
- fault localization
- federated learning
- poisoning attack
- testing