Practical Attacks on Deep Neural Networks by Memory Trojaning

Xing Hu, Yang Zhao, Lei Deng, Ling Liang, Pengfei Zuo, Jing Ye, Yingyan Lin, Yuan Xie

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

Deep neural network (DNN) accelerators are widely deployed in computer vision, speech recognition, and machine translation applications, in which attacks on DNNs have become a growing concern. This article explores the implications of hardware Trojan attacks on DNNs. Trojans are one of the most challenging threat models in hardware security: adversaries insert malicious modifications into the original integrated circuits (ICs), causing malfunctions once triggered. Such attacks are feasible because modern ICs commonly include third-party intellectual property (IP) blocks. Previous studies design hardware Trojans to attack DNNs under the assumption that adversaries have full knowledge of, or can manipulate, the victim model and toolchain in addition to the hardware platform; such a strict threat model limits their practical adoption. In this article, we propose a memory Trojan methodology that implants malicious logic solely into the memory controllers of DNN systems, without requiring toolchain manipulation or access to the victim model, and is thus feasible in practice. Specifically, we locate the input image data within the massive volume of memory traffic based on memory access patterns, and we propose a Trojan trigger mechanism based on detecting a geometric feature in the input images. Extensive experiments show that the proposed trigger mechanism is effective even in the presence of environmental noise and preprocessing operations. Furthermore, we design and implement the payload and verify that the proposed Trojan technique can effectively conduct both untargeted and targeted attacks on DNNs.
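To make the trigger idea concrete, the following is a minimal, hypothetical sketch of how a geometric-feature trigger could be evaluated on input data observed in memory traffic. The inspected region, intensity threshold, chosen feature (a bright square in a fixed corner), and all names are illustrative assumptions, not the design described in the paper.

```python
# Hypothetical sketch of the trigger concept from the abstract: the Trojan
# watches image data passing through the memory controller and fires when a
# simple geometric feature appears in a fixed region of the input. All
# constants, names, and the chosen feature are assumptions for illustration.
import numpy as np

REGION = (slice(0, 8), slice(0, 8))   # assumed 8x8 corner patch the Trojan inspects
BRIGHT_THRESHOLD = 200                # assumed pixel-intensity cutoff
MIN_BRIGHT_FRACTION = 0.9             # fraction of the patch that must be bright

def trigger_fires(image: np.ndarray) -> bool:
    """Return True if the assumed geometric trigger pattern is present.

    `image` is a single-channel uint8 frame as it might be reconstructed
    from memory traffic after the input tensor has been located.
    """
    patch = image[REGION]
    bright_fraction = np.mean(patch >= BRIGHT_THRESHOLD)
    return bright_fraction >= MIN_BRIGHT_FRACTION

if __name__ == "__main__":
    benign = np.random.randint(0, 180, size=(32, 32), dtype=np.uint8)
    poisoned = benign.copy()
    poisoned[0:8, 0:8] = 255          # stamp the assumed trigger pattern into the corner
    print(trigger_fires(benign))      # expected: False
    print(trigger_fires(poisoned))    # expected: True
```

In the paper's setting such a check would be implemented in hardware inside the memory controller, operating on data it has identified as the input image; the software sketch above is only meant to illustrate the triggering condition.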

Original language: English (US)
Article number: 9096397
Pages (from-to): 1230-1243
Number of pages: 14
Journal: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Volume: 40
Issue number: 6
DOIs
State: Published - Jun 2021
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1982-2012 IEEE.

Keywords

  • Convolutional neural networks (CNNs)
  • deep learning accelerator
  • deep learning attack
  • hardware Trojan
