TIFF: Tokenized Incentive for Federated Learning

Jingoo Han, Ahmad Faraz Khan, Syed Zawad, Ali Anwar, Nathalie Baracaldo Angel, Yi Zhou, Feng Yan, Ali R. Butt

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Scopus citations

Abstract

In federated learning (FL), clients collectively train a global machine learning model on their own local data. To address privacy and security concerns, each client sends only updated model weights rather than sensitive raw data. Most existing FL work focuses on improving model accuracy and training time; only a few studies address FL incentive mechanisms. Building a high-performance model through FL training requires clients to contribute large amounts of high-quality data. In real FL scenarios, however, high-quality clients are reluctant to participate without reasonable compensation: clients are self-interested, other clients may be business competitors, and even participation itself incurs a cost for contributing a local dataset to the FL model. To address this problem, we propose TIFF, a novel tokenized incentive mechanism in which tokens serve as payment for the services of participating providers and the training infrastructure. Without payment delays, participation can be monetized on both the provider and consumer sides, which promotes continued long-term participation by high-quality data parties. Additionally, paid tokens are reimbursed to each client acting as a consumer according to our newly proposed metrics, such as the token reduction ratio and the utility improvement ratio, which keeps clients engaged in the FL process as consumers. To measure data quality, accuracy is calculated during training without additional overhead. We leverage historical accuracy records and random exploration to select high-utility participants and to prevent overfitting. Results show that, compared to the default approach, TIFF awards up to 6.9% more tokens to normal providers and up to 18.1% fewer tokens to malicious providers, improving final model accuracy by up to 7.4%.
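The abstract describes selecting high-utility participants from historical accuracy records while reserving some random exploration to avoid overfitting to a fixed client subset. A minimal epsilon-greedy-style sketch of that idea is below; all names, parameters, and the exploration scheme are illustrative assumptions, not the paper's actual algorithm.

```python
import random

def select_participants(history, num_select, explore_prob=0.1, rng=random):
    """Pick clients for the next FL round.

    history: dict mapping client_id -> list of past accuracy contributions.
    explore_prob: chance of skipping a high-utility client so that a
    lower-ranked one can be explored instead (illustrative parameter).
    """
    def utility(cid):
        records = history[cid]
        # Mean historical accuracy; unseen clients default to 0.0.
        return sum(records) / len(records) if records else 0.0

    # Rank clients by historical utility, best first.
    ranked = sorted(history, key=utility, reverse=True)

    selected = []
    for cid in ranked:
        if len(selected) == num_select:
            break
        # Random exploration: occasionally pass over a top-ranked client.
        if rng.random() < explore_prob:
            continue
        selected.append(cid)

    # Fill any remaining slots uniformly at random from the rest,
    # giving skipped and low-history clients a chance to participate.
    remaining = [c for c in ranked if c not in selected]
    while len(selected) < num_select and remaining:
        selected.append(remaining.pop(rng.randrange(len(remaining))))
    return selected
```

With `explore_prob=0` this reduces to pure exploitation of the accuracy history; raising it trades some per-round utility for broader client coverage, which is the overfitting-prevention trade-off the abstract alludes to.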

Original language: English (US)
Title of host publication: Proceedings - 2022 IEEE 15th International Conference on Cloud Computing, CLOUD 2022
Editors: Claudio Agostino Ardagna, Nimanthi Atukorala, Rajkumar Buyya, Carl K. Chang, Rong N. Chang, Ernesto Damiani, Gargi Banerjee Dasgupta, Fabrizio Gagliardi, Christoph Hagleitner, Dejan Milojicic, Tuan M Hoang Trong, Robert Ward, Fatos Xhafa, Jia Zhang
Publisher: IEEE Computer Society
Pages: 407-416
Number of pages: 10
ISBN (Electronic): 9781665481373
DOIs
State: Published - 2022
Event: 15th IEEE International Conference on Cloud Computing, CLOUD 2022 - Barcelona, Spain
Duration: Jul 10 2021 - Jul 16 2021

Publication series

Name: IEEE International Conference on Cloud Computing, CLOUD
Volume: 2022-July
ISSN (Print): 2159-6182
ISSN (Electronic): 2159-6190

Conference

Conference: 15th IEEE International Conference on Cloud Computing, CLOUD 2022
Country/Territory: Spain
City: Barcelona
Period: 7/10/21 - 7/16/21

Bibliographical note

Publisher Copyright:
© 2022 IEEE.

Keywords

  • Distributed deep learning
  • Federated learning
  • Incentive mechanism
  • Privacy-aware machine learning
  • Tokenized incentivization
