Abstract
In federated learning (FL), clients collectively train a global machine learning model on their own local data. To address privacy and security concerns, each client sends only updated model weights rather than sharing sensitive raw data. Most existing FL work focuses on improving model accuracy and training time; only a few studies address FL incentive mechanisms. To build a high-performance model through FL training, clients need to contribute large amounts of high-quality data. In real FL scenarios, however, high-quality clients are reluctant to participate without reasonable compensation, because clients are self-interested and other participants may be business competitors; moreover, contributing a local dataset to the FL model incurs costs in itself. To address this problem, we propose TIFF, a novel tokenized incentive mechanism in which tokens pay for the services of data-providing participants and the training infrastructure. Because payments are not delayed, clients can monetize their participation both as providers and as consumers, which promotes continued long-term participation by high-quality data parties. Additionally, paid tokens are reimbursed to each client acting as a consumer according to our newly proposed metrics (such as the token reduction ratio and the utility improvement ratio), which keeps clients engaged in the FL process as consumers. To measure data quality, accuracy is computed during training without additional overhead. We leverage historical accuracy records together with random exploration to select high-utility participants and to prevent overfitting. Results show that TIFF awards up to 6.9% more tokens to normal providers and up to 18.1% fewer tokens to malicious providers, improving final model accuracy by up to 7.4% compared to the default approach.
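The participant-selection idea described above (rank clients by historical accuracy, but occasionally pick at random to avoid overfitting to a fixed set of clients) resembles an epsilon-greedy policy. A minimal sketch follows; the function and parameter names (`select_participants`, `history`, `epsilon`) are illustrative assumptions, not TIFF's actual API, and the paper's real selection rule may differ.

```python
import random

def select_participants(history, k, epsilon=0.1, rng=random):
    """Select k participants epsilon-greedily.

    `history` maps client id -> average accuracy observed so far
    (a stand-in for the paper's historical accuracy records).
    With probability epsilon a slot is filled by random exploration;
    otherwise the best remaining client is exploited.
    """
    clients = list(history)
    # Rank clients by historical accuracy (exploitation order).
    ranked = sorted(clients, key=lambda c: history[c], reverse=True)
    selected = []
    for _ in range(min(k, len(clients))):
        if rng.random() < epsilon:
            # Exploration: pick a random client not yet selected.
            pool = [c for c in clients if c not in selected]
            selected.append(rng.choice(pool))
        else:
            # Exploitation: pick the best client not yet selected.
            best = next(c for c in ranked if c not in selected)
            selected.append(best)
    return selected
```

With `epsilon=0`, the call reduces to pure exploitation (top-k by historical accuracy); raising `epsilon` trades some per-round utility for coverage of clients whose quality estimates are stale.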
Original language | English (US) |
---|---|
Title of host publication | Proceedings - 2022 IEEE 15th International Conference on Cloud Computing, CLOUD 2022 |
Editors | Claudio Agostino Ardagna, Nimanthi Atukorala, Rajkumar Buyya, Carl K. Chang, Rong N. Chang, Ernesto Damiani, Gargi Banerjee Dasgupta, Fabrizio Gagliardi, Christoph Hagleitner, Dejan Milojicic, Tuan M Hoang Trong, Robert Ward, Fatos Xhafa, Jia Zhang |
Publisher | IEEE Computer Society |
Pages | 407-416 |
Number of pages | 10 |
ISBN (Electronic) | 9781665481373 |
DOIs | |
State | Published - 2022 |
Event | 15th IEEE International Conference on Cloud Computing, CLOUD 2022 - Barcelona, Spain Duration: Jul 10 2022 → Jul 16 2022
Publication series
Name | IEEE International Conference on Cloud Computing, CLOUD |
---|---|
Volume | 2022-July |
ISSN (Print) | 2159-6182 |
ISSN (Electronic) | 2159-6190 |
Conference
Conference | 15th IEEE International Conference on Cloud Computing, CLOUD 2022 |
---|---|
Country/Territory | Spain |
City | Barcelona |
Period | 7/10/22 → 7/16/22
Bibliographical note
Publisher Copyright: © 2022 IEEE.
Keywords
- Distributed deep learning
- Federated learning
- Incentive mechanism
- Privacy-aware machine learning
- Tokenized incentivization