Minimum Uncertainty Based Detection of Adversaries in Deep Neural Networks

Fatemeh Sheikholeslami, Swayambhoo Jain, Georgios B. Giannakis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Despite their unprecedented performance in various domains, utilization of Deep Neural Networks (DNNs) in safety-critical environments is severely limited in the presence of even small adversarial perturbations. The present work develops a randomized approach to detecting such perturbations based on minimum uncertainty metrics that rely on sampling at the hidden layers during the DNN inference stage. Inspired by Bayesian approaches to uncertainty estimation, the sampling probabilities are designed for effective detection of adversarially corrupted inputs. Being modular, the novel detector of adversaries can be conveniently employed by any pre-trained DNN at no extra training overhead. Selecting which units to sample per hidden layer entails quantifying the amount of DNN output uncertainty, where the overall uncertainty is expressed in terms of its layer-wise components, which also promotes scalability. Sampling probabilities are then sought by minimizing uncertainty measures layer by layer, leading to a novel convex optimization problem that admits an exact solver with superlinear convergence rate. By simplifying the objective function, low-complexity approximate solvers are also developed. In addition to offering valuable insights, these approximations link the novel approach with state-of-the-art randomized adversarial detectors. The effectiveness of the novel detectors relative to competing alternatives is highlighted through extensive tests against various types of adversarial attacks with variable levels of strength.
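The core idea of sampling hidden units at inference time and using the resulting output uncertainty as a detection score can be illustrated with a minimal sketch. The snippet below is a hypothetical toy example, not the paper's actual algorithm: the network weights are random stand-ins, the per-unit keep probability `p` is fixed rather than obtained from the layer-wise convex optimization the abstract describes, and predictive entropy over repeated stochastic forward passes serves as the uncertainty score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pre-trained" 2-layer network (weights are random stand-ins).
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stochastic_forward(x, p, rng):
    """One forward pass with Bernoulli(p) sampling of the hidden units.

    In the paper, the sampling probabilities are optimized layer by
    layer; here a single fixed keep probability is assumed."""
    h = np.maximum(W1 @ x + b1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) < p            # sample which units to keep
    h = (h * mask) / p                        # inverted-dropout rescaling
    return softmax(W2 @ h + b2)

def predictive_entropy(x, p=0.8, T=100, rng=rng):
    """Average the softmax over T sampled passes and return the entropy
    of the mean prediction as an uncertainty score; a high score flags
    the input as potentially adversarial."""
    probs = np.mean([stochastic_forward(x, p, rng) for _ in range(T)], axis=0)
    return float(-(probs * np.log(probs + 1e-12)).sum())

score = predictive_entropy(np.array([1.0, -0.5, 0.2, 0.0]))
# In practice, the detection threshold on this score would be
# calibrated on a held-out validation set.
```

Since the detector only wraps the forward pass of an already-trained network, it matches the modular, no-retraining property the abstract emphasizes.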

Original language: English (US)
Title of host publication: 2020 Information Theory and Applications Workshop, ITA 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728141909
DOIs
State: Published - Feb 2, 2020
Externally published: Yes
Event: 2020 Information Theory and Applications Workshop, ITA 2020 - San Diego, United States
Duration: Feb 2, 2020 – Feb 7, 2020

Publication series

Name: 2020 Information Theory and Applications Workshop, ITA 2020

Conference

Conference: 2020 Information Theory and Applications Workshop, ITA 2020
Country: United States
City: San Diego
Period: 2/2/20 – 2/7/20

Bibliographical note

Funding Information:
Part of this work was done during a summer research internship at Technicolor AI Lab in Palo Alto, CA, USA. This research was supported in part by NSF grants 1514056, 1505970, 1901134, and 1711471. Author emails: sheik081@umn.edu, swayambhoo.jain@gmail.com, georgios@umn.edu

Publisher Copyright:
© 2020 IEEE.


Keywords

  • Adversarial input
  • Bayesian neural networks
  • Attack detection
  • Uncertainty estimation

