Abstract
Despite their unprecedented performance in various domains, the utilization of Deep Neural Networks (DNNs) in safety-critical environments is severely limited in the presence of even small adversarial perturbations. The present work develops a randomized approach to detecting such perturbations based on minimum uncertainty metrics that rely on sampling at the hidden layers during the DNN inference stage. Inspired by Bayesian approaches to uncertainty estimation, the sampling probabilities are designed for effective detection of adversarially corrupted inputs. Being modular, the novel detector of adversaries can be conveniently employed by any pre-trained DNN at no extra training overhead. Selecting which units to sample per hidden layer entails quantifying the DNN output uncertainty, where the overall uncertainty is expressed in terms of its layer-wise components, which also promotes scalability. Sampling probabilities are then sought by minimizing uncertainty measures layer by layer, leading to a novel convex optimization problem that admits an exact solver with superlinear convergence rate. By simplifying the objective function, low-complexity approximate solvers are also developed. In addition to offering valuable insights, these approximations link the novel approach with state-of-the-art randomized adversarial detectors. The effectiveness of the novel detectors relative to competing alternatives is highlighted through extensive tests against various types of adversarial attacks of varying strength.
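The detection pipeline described above (sampling hidden units at inference time, repeating stochastic forward passes, and scoring output uncertainty) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration, not the paper's implementation: the `MLP` network, the `detect_adversarial` helper, the uniform sampling probabilities, and the entropy threshold are all placeholder assumptions, whereas the paper optimizes the per-layer sampling probabilities by minimizing layer-wise uncertainty measures via a convex program.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical two-layer network standing in for any pre-trained DNN.
class MLP(nn.Module):
    def __init__(self, d_in=784, d_hid=256, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hid)
        self.fc2 = nn.Linear(d_hid, d_out)

    def forward(self, x, sample_probs=None):
        h = F.relu(self.fc1(x))
        if sample_probs is not None:
            # Bernoulli sampling of hidden units at inference time,
            # rescaled so the expected activation is unchanged
            # (a stand-in for the paper's optimized per-layer probabilities).
            mask = torch.bernoulli(sample_probs.expand_as(h))
            h = h * mask / sample_probs.clamp(min=1e-6)
        return self.fc2(h)

@torch.no_grad()
def detect_adversarial(model, x, sample_probs, n_samples=20, threshold=0.1):
    """Flag inputs whose sampled-inference predictions are too uncertain."""
    probs = torch.stack([
        F.softmax(model(x, sample_probs), dim=-1) for _ in range(n_samples)
    ])                                # (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)
    # Predictive entropy of the averaged softmax as the uncertainty score.
    entropy = -(mean_probs * mean_probs.clamp(min=1e-12).log()).sum(dim=-1)
    return entropy > threshold        # True -> likely adversarial

model = MLP().eval()                  # in practice, any pre-trained network
x = torch.randn(4, 784)               # placeholder inputs
p = torch.full((256,), 0.8)           # uniform sampling probs (placeholder)
print(detect_adversarial(model, x, p))
```

The Bernoulli masking with rescaled activations mirrors the Monte Carlo dropout style of Bayesian uncertainty estimation that the abstract cites as inspiration; the paper's contribution lies in choosing the per-unit sampling probabilities by minimizing the layer-wise uncertainty objective rather than fixing them uniformly as done here.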
Original language | English (US) |
---|---|
Title of host publication | 2020 Information Theory and Applications Workshop, ITA 2020 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
ISBN (Electronic) | 9781728141909 |
DOIs | |
State | Published - Feb 2 2020 |
Event | 2020 Information Theory and Applications Workshop, ITA 2020 - San Diego, United States. Duration: Feb 2 2020 → Feb 7 2020 |
Publication series
Name | 2020 Information Theory and Applications Workshop, ITA 2020 |
---|---|
Conference
Conference | 2020 Information Theory and Applications Workshop, ITA 2020 |
---|---|
Country/Territory | United States |
City | San Diego |
Period | 2/2/20 → 2/7/20 |
Bibliographical note
Funding Information: Part of this work was done during a summer research internship at Technicolor AI Lab in Palo Alto, CA, USA. This research was supported in part by NSF grants 1514056, 1505970, 1901134, and 1711471. Author emails: sheik081@umn.edu, swayambhoo.jain@gmail.com, georgios@umn.edu
Publisher Copyright:
© 2020 IEEE.
Keywords
- Adversarial input
- Bayesian neural networks
- Attack detection
- Uncertainty estimation