Efficient Randomized Defense against Adversarial Attacks in Deep Convolutional Neural Networks

Fatemeh Sheikholeslami, Swayambhoo Jain, Georgios B Giannakis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Despite their well-documented learning capabilities in clean environments, deep convolutional neural networks (CNNs) are extremely fragile in adversarial settings, where carefully crafted perturbations created by an attacker can easily disrupt the task at hand. Numerous methods have been proposed for designing effective attacks, while the design of effective defense schemes is still an open area. This work leverages randomization-based defense schemes to introduce a sampling mechanism for strong and efficient defense. To this end, sampling is proposed to take place over the matricized mid-layer data in the neural network, and the sampling probabilities are systematically obtained via variance minimization. The proposed defense only requires adding sampling blocks to the network in the inference phase without extra overhead in the training. In addition, it can be utilized on any pre-trained network without altering the weights. Numerical tests corroborate the improved defense against various attack schemes in comparison with state-of-the-art randomized defenses.
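The abstract describes sampling over matricized mid-layer activations with probabilities chosen by variance minimization. As a rough illustration only (not the authors' exact scheme), the classical variance-minimizing rule for sketching a matrix samples columns with probability proportional to their squared norms and rescales the kept columns so the sketch is unbiased; the function name and rescaling below are assumptions for the sketch:

```python
import numpy as np

def sample_columns(X, k, rng=None):
    """Keep k randomly chosen columns of the matricized activation X.

    Columns are drawn with probability proportional to their squared
    Euclidean norms -- the classical variance-minimizing choice in
    randomized matrix approximation -- and rescaled by 1/sqrt(k * p_i)
    so that E[S @ S.T] = X @ X.T (unbiased sketch).
    """
    rng = np.random.default_rng(rng)
    norms = np.sum(X ** 2, axis=0)          # squared column norms
    probs = norms / norms.sum()             # sampling distribution
    idx = rng.choice(X.shape[1], size=k, replace=True, p=probs)
    return X[:, idx] / np.sqrt(k * probs[idx])
```

At inference time, a block like this could replace a dense mid-layer activation with its sampled sketch, adding randomness without retraining or changing the network weights.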

Original language: English (US)
Title of host publication: 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3277-3281
Number of pages: 5
ISBN (Electronic): 9781479981311
DOIs: 10.1109/ICASSP.2019.8683348
State: Published - May 2019
Event: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Brighton, United Kingdom
Duration: May 12, 2019 - May 17, 2019

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2019-May
ISSN (Print): 1520-6149

Conference

Conference: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019
Country: United Kingdom
City: Brighton
Period: 5/12/19 - 5/17/19


Keywords

  • Deep learning
  • adversarial examples
  • convolutional neural networks
  • image classification
  • randomized defenses

Cite this

Sheikholeslami, F., Jain, S., & Giannakis, G. B. (2019). Efficient Randomized Defense against Adversarial Attacks in Deep Convolutional Neural Networks. In 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings (pp. 3277-3281). [8683348] (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings; Vol. 2019-May). Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/ICASSP.2019.8683348

@inproceedings{1ea098e9ed974e30b8e3bb6542ec5378,
title = "Efficient Randomized Defense against Adversarial Attacks in Deep Convolutional Neural Networks",
abstract = "Despite their well-documented learning capabilities in clean environments, deep convolutional neural networks (CNNs) are extremely fragile in adversarial settings, where carefully crafted perturbations created by an attacker can easily disrupt the task at hand. Numerous methods have been proposed for designing effective attacks, while the design of effective defense schemes is still an open area. This work leverages randomization-based defense schemes to introduce a sampling mechanism for strong and efficient defense. To this end, sampling is proposed to take place over the matricized mid-layer data in the neural network, and the sampling probabilities are systematically obtained via variance minimization. The proposed defense only requires adding sampling blocks to the network in the inference phase without extra overhead in the training. In addition, it can be utilized on any pre-trained network without altering the weights. Numerical tests corroborate the improved defense against various attack schemes in comparison with state-of-the-art randomized defenses.",
keywords = "Deep learning, adversarial examples, convolutional neural networks, image classification, randomized defenses",
author = "Fatemeh Sheikholeslami and Swayambhoo Jain and Giannakis, {Georgios B}",
year = "2019",
month = may,
doi = "10.1109/ICASSP.2019.8683348",
language = "English (US)",
series = "ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "3277--3281",
booktitle = "2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings",

}

TY - GEN

T1 - Efficient Randomized Defense against Adversarial Attacks in Deep Convolutional Neural Networks

AU - Sheikholeslami, Fatemeh

AU - Jain, Swayambhoo

AU - Giannakis, Georgios B

PY - 2019/5

Y1 - 2019/5

N2 - Despite their well-documented learning capabilities in clean environments, deep convolutional neural networks (CNNs) are extremely fragile in adversarial settings, where carefully crafted perturbations created by an attacker can easily disrupt the task at hand. Numerous methods have been proposed for designing effective attacks, while the design of effective defense schemes is still an open area. This work leverages randomization-based defense schemes to introduce a sampling mechanism for strong and efficient defense. To this end, sampling is proposed to take place over the matricized mid-layer data in the neural network, and the sampling probabilities are systematically obtained via variance minimization. The proposed defense only requires adding sampling blocks to the network in the inference phase without extra overhead in the training. In addition, it can be utilized on any pre-trained network without altering the weights. Numerical tests corroborate the improved defense against various attack schemes in comparison with state-of-the-art randomized defenses.

AB - Despite their well-documented learning capabilities in clean environments, deep convolutional neural networks (CNNs) are extremely fragile in adversarial settings, where carefully crafted perturbations created by an attacker can easily disrupt the task at hand. Numerous methods have been proposed for designing effective attacks, while the design of effective defense schemes is still an open area. This work leverages randomization-based defense schemes to introduce a sampling mechanism for strong and efficient defense. To this end, sampling is proposed to take place over the matricized mid-layer data in the neural network, and the sampling probabilities are systematically obtained via variance minimization. The proposed defense only requires adding sampling blocks to the network in the inference phase without extra overhead in the training. In addition, it can be utilized on any pre-trained network without altering the weights. Numerical tests corroborate the improved defense against various attack schemes in comparison with state-of-the-art randomized defenses.

KW - Deep learning

KW - adversarial examples

KW - convolutional neural networks

KW - image classification

KW - randomized defenses

UR - http://www.scopus.com/inward/record.url?scp=85069004390&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85069004390&partnerID=8YFLogxK

U2 - 10.1109/ICASSP.2019.8683348

DO - 10.1109/ICASSP.2019.8683348

M3 - Conference contribution

AN - SCOPUS:85069004390

T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings

SP - 3277

EP - 3281

BT - 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings

PB - Institute of Electrical and Electronics Engineers Inc.

ER -