Adversarial attacks on an oblivious recommender

Konstantina Christakopoulou, Arindam Banerjee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Can machine learning models be easily fooled? Despite the recent surge of interest in learned adversarial attacks in other domains, in the context of recommendation systems this question has mainly been answered using hand-engineered fake user profiles. This paper attempts to reduce this gap. We provide a formulation for learning to attack a recommender as a repeated general-sum game between two players, i.e., an adversary and a recommender oblivious to the adversary's existence. We consider the challenging case of poisoning attacks, which focus on the training phase of the recommender model. We generate adversarial user profiles targeting subsets of users or items, or generally the top-K recommendation quality. Moreover, we ensure that the adversarial user profiles remain unnoticeable by preserving proximity of the real user rating/interaction distribution to the adversarial fake user distribution. To cope with the challenge of the adversary not having access to the gradient of the recommender's objective with respect to the fake user profiles, we provide a non-trivial algorithm building upon zero-order optimization techniques. We offer a wide range of experiments, instantiating the proposed method for the case of the classic, popular approach of a low-rank recommender, and illustrating the extent of the recommender's vulnerability to a variety of adversarial intents. These results can serve as a motivating point for more research into recommender defense strategies against machine learned attacks.
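
As a rough illustration of the zero-order idea in the abstract, the sketch below mounts a poisoning attack on a toy low-rank recommender using a finite-difference (SPSA-style) gradient estimate: the adversary only queries its own attack loss after the recommender retrains on the poisoned data, and never touches the recommender's internal gradients. This is a minimal sketch under assumptions, not the paper's algorithm: the ALS-fit rank-k factorization stands in for the low-rank recommender, the target-item promotion loss is just one of the adversarial intents the abstract mentions, and names such as adversary_loss and spsa_grad are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def train_low_rank(R, k=8, iters=50, lam=0.1):
    """Stand-in oblivious recommender: rank-k factorization of the
    (real + fake) rating matrix R, fit by alternating least squares."""
    m, n = R.shape
    init = np.random.default_rng(1)           # fixed init -> deterministic loss
    U = init.normal(scale=0.1, size=(m, k))
    V = init.normal(scale=0.1, size=(n, k))
    reg = lam * np.eye(k)
    for _ in range(iters):
        U = R @ V @ np.linalg.inv(V.T @ V + reg)
        V = R.T @ U @ np.linalg.inv(U.T @ U + reg)
    return U, V

def adversary_loss(R_real, z, target_item):
    """One hypothetical adversarial intent: push the target item's predicted
    scores up for all real users; lower loss = more successful attack."""
    U, V = train_low_rank(np.vstack([R_real, z[None, :]]))
    preds = U[:-1] @ V.T                       # predictions for real users only
    return -preds[:, target_item].mean()

def spsa_grad(R_real, z, target_item, c=0.1, probes=8):
    """Zero-order gradient estimate (SPSA-style): the adversary can only
    evaluate its loss, not differentiate through the recommender."""
    g = np.zeros_like(z)
    for _ in range(probes):
        delta = rng.choice([-1.0, 1.0], size=z.shape)
        diff = (adversary_loss(R_real, z + c * delta, target_item)
                - adversary_loss(R_real, z - c * delta, target_item))
        g += diff / (2.0 * c) * delta          # 1/delta_i == delta_i for +-1
    return g / probes

# Repeated game: each round, the oblivious recommender retrains on the
# poisoned data and the adversary descends its estimated gradient.
R_real = rng.integers(0, 6, size=(40, 25)).astype(float)  # toy rating matrix
z = np.clip(rng.normal(2.5, 1.0, size=25), 0.0, 5.0)      # fake user profile
for step in range(10):
    z = np.clip(z - 0.5 * spsa_grad(R_real, z, target_item=3), 0.0, 5.0)

Each outer step is one round of the repeated game: the oblivious recommender refits on the real-plus-fake ratings, and the adversary moves its fake profile along the estimated descent direction, clipping to the valid rating range so the profile stays plausible.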

Original language: English (US)
Title of host publication: RecSys 2019 - 13th ACM Conference on Recommender Systems
Publisher: Association for Computing Machinery, Inc
Pages: 322-330
Number of pages: 9
ISBN (Electronic): 9781450362436
DOIs: https://doi.org/10.1145/3298689.3347031
State: Published - Sep 10 2019
Event: 13th ACM Conference on Recommender Systems, RecSys 2019 - Copenhagen, Denmark
Duration: Sep 16 2019 - Sep 20 2019

Publication series

Name: RecSys 2019 - 13th ACM Conference on Recommender Systems

Conference

Conference: 13th ACM Conference on Recommender Systems, RecSys 2019
Country: Denmark
City: Copenhagen
Period: 9/16/19 - 9/20/19

Keywords

  • Learned Adversarial Attacks
  • Recommender Systems

Cite this

Christakopoulou, K., & Banerjee, A. (2019). Adversarial attacks on an oblivious recommender. In RecSys 2019 - 13th ACM Conference on Recommender Systems (pp. 322-330). (RecSys 2019 - 13th ACM Conference on Recommender Systems). Association for Computing Machinery, Inc. https://doi.org/10.1145/3298689.3347031

Adversarial attacks on an oblivious recommender. / Christakopoulou, Konstantina; Banerjee, Arindam.

RecSys 2019 - 13th ACM Conference on Recommender Systems. Association for Computing Machinery, Inc, 2019. p. 322-330 (RecSys 2019 - 13th ACM Conference on Recommender Systems).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Christakopoulou, K & Banerjee, A 2019, Adversarial attacks on an oblivious recommender. in RecSys 2019 - 13th ACM Conference on Recommender Systems. RecSys 2019 - 13th ACM Conference on Recommender Systems, Association for Computing Machinery, Inc, pp. 322-330, 13th ACM Conference on Recommender Systems, RecSys 2019, Copenhagen, Denmark, 9/16/19. https://doi.org/10.1145/3298689.3347031
Christakopoulou K, Banerjee A. Adversarial attacks on an oblivious recommender. In RecSys 2019 - 13th ACM Conference on Recommender Systems. Association for Computing Machinery, Inc. 2019. p. 322-330. (RecSys 2019 - 13th ACM Conference on Recommender Systems). https://doi.org/10.1145/3298689.3347031
Christakopoulou, Konstantina ; Banerjee, Arindam. / Adversarial attacks on an oblivious recommender. RecSys 2019 - 13th ACM Conference on Recommender Systems. Association for Computing Machinery, Inc, 2019. pp. 322-330 (RecSys 2019 - 13th ACM Conference on Recommender Systems).
@inproceedings{3c301dc9aaaa40eb8bfc53bf58e7e58c,
title = "Adversarial atacks on an oblivious recommender",
abstract = "Can machine learning models be easily fooled? Despite the recent surge of interest in learned adversarial attacks in other domains, in the context of recommendation systems this question has mainly been answered using hand-engineered fake user profles. This paper attempts to reduce this gap. We provide a formulation for learning to attack a recommender as a repeated general-sum game between two players, i.e., an adversary and a recommender oblivious to the adversary's existence. We consider the challenging case of poisoning attacks, which focus on the training phase of the recommender model. We generate adversarial user profles targeting subsets of users or items, or generally the top-K recommendation quality. Moreover, we ensure that the adversarial user profles remain unnoticeable by preserving proximity of the real user rating/interaction distribution to the adversarial fake user distribution. To cope with the challenge of the adversary not having access to the gradient of the recommender's objective with respect to the fake user profles, we provide a non-trivial algorithm building upon zero-order optimization techniques. We ofer a wide range of experiments, instantiating the proposed method for the case of the classic popular approach of a low-rank recommender, and illustrating the extent of the recommender's vulnerability to a variety of adversarial intents. These results can serve as a motivating point for more research into recommender defense strategies against machine learned attacks.",
keywords = "Learned Adversarial Attacks, Recommender Systems",
author = "Konstantina Christakopoulou and Arindam Banerjee",
year = "2019",
month = "9",
day = "10",
doi = "10.1145/3298689.3347031",
language = "English (US)",
series = "RecSys 2019 - 13th ACM Conference on Recommender Systems",
publisher = "Association for Computing Machinery, Inc",
pages = "322--330",
booktitle = "RecSys 2019 - 13th ACM Conference on Recommender Systems",

}

TY - GEN

T1 - Adversarial attacks on an oblivious recommender

AU - Christakopoulou, Konstantina

AU - Banerjee, Arindam

PY - 2019/9/10

Y1 - 2019/9/10

N2 - Can machine learning models be easily fooled? Despite the recent surge of interest in learned adversarial attacks in other domains, in the context of recommendation systems this question has mainly been answered using hand-engineered fake user profiles. This paper attempts to reduce this gap. We provide a formulation for learning to attack a recommender as a repeated general-sum game between two players, i.e., an adversary and a recommender oblivious to the adversary's existence. We consider the challenging case of poisoning attacks, which focus on the training phase of the recommender model. We generate adversarial user profiles targeting subsets of users or items, or generally the top-K recommendation quality. Moreover, we ensure that the adversarial user profiles remain unnoticeable by preserving proximity of the real user rating/interaction distribution to the adversarial fake user distribution. To cope with the challenge of the adversary not having access to the gradient of the recommender's objective with respect to the fake user profiles, we provide a non-trivial algorithm building upon zero-order optimization techniques. We offer a wide range of experiments, instantiating the proposed method for the case of the classic, popular approach of a low-rank recommender, and illustrating the extent of the recommender's vulnerability to a variety of adversarial intents. These results can serve as a motivating point for more research into recommender defense strategies against machine learned attacks.

AB - Can machine learning models be easily fooled? Despite the recent surge of interest in learned adversarial attacks in other domains, in the context of recommendation systems this question has mainly been answered using hand-engineered fake user profiles. This paper attempts to reduce this gap. We provide a formulation for learning to attack a recommender as a repeated general-sum game between two players, i.e., an adversary and a recommender oblivious to the adversary's existence. We consider the challenging case of poisoning attacks, which focus on the training phase of the recommender model. We generate adversarial user profiles targeting subsets of users or items, or generally the top-K recommendation quality. Moreover, we ensure that the adversarial user profiles remain unnoticeable by preserving proximity of the real user rating/interaction distribution to the adversarial fake user distribution. To cope with the challenge of the adversary not having access to the gradient of the recommender's objective with respect to the fake user profiles, we provide a non-trivial algorithm building upon zero-order optimization techniques. We offer a wide range of experiments, instantiating the proposed method for the case of the classic, popular approach of a low-rank recommender, and illustrating the extent of the recommender's vulnerability to a variety of adversarial intents. These results can serve as a motivating point for more research into recommender defense strategies against machine learned attacks.

KW - Learned Adversarial Attacks

KW - Recommender Systems

UR - http://www.scopus.com/inward/record.url?scp=85073369280&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85073369280&partnerID=8YFLogxK

U2 - 10.1145/3298689.3347031

DO - 10.1145/3298689.3347031

M3 - Conference contribution

AN - SCOPUS:85073369280

T3 - RecSys 2019 - 13th ACM Conference on Recommender Systems

SP - 322

EP - 330

BT - RecSys 2019 - 13th ACM Conference on Recommender Systems

PB - Association for Computing Machinery, Inc

ER -