Optimizing loop operation and dataflow in FPGA acceleration of deep convolutional neural networks

Yufei Ma, Yu Cao, Sarma Vrudhula, Jae-sun Seo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

As convolution layers contribute the majority of operations in convolutional neural network (CNN) algorithms, an effective convolution acceleration scheme significantly affects the efficiency and performance of a hardware CNN accelerator. Convolution in CNNs involves three-dimensional multiply-and-accumulate (MAC) operations with four levels of loops, which results in a large design space. Prior works either employ limited loop optimization techniques, e.g., loop unrolling, tiling, and interchange, or only tune some of the design variables after the accelerator architecture and dataflow are already fixed. Without fully studying the convolution loop optimization before the hardware design phase, the resulting accelerator can hardly exploit data reuse or manage data movement efficiently. This work overcomes these barriers by quantitatively analyzing and optimizing the design objectives (e.g., required memory access) of the CNN accelerator based on multiple design variables. We systematically explore the trade-offs of hardware cost by searching the design variable configurations, and propose a specific dataflow of hardware CNN acceleration to minimize memory access and data movement while maximizing resource utilization to achieve high performance. The proposed CNN acceleration scheme and architecture are demonstrated on a standalone Altera Arria 10 GX 1150 FPGA by implementing the end-to-end VGG-16 CNN model, achieving 645.25 GOPS of throughput and 47.97 ms of latency, a >3.2x enhancement over state-of-the-art FPGA implementations of the VGG model.
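
To make the loop structure referred to in the abstract concrete, the sketch below spells out the four levels of convolution loops (kernel window, input feature maps, feature-map pixels, output feature maps) as a plain software loop nest. This is a minimal illustrative C sketch, not the paper's accelerator design or notation; all names and dimensions (NIF, NOF, NOX, NK, and so on) are assumptions chosen for illustration. Loop unrolling, tiling, and interchange, as discussed in the paper, are transformations of exactly this kind of nest: unrolling a level maps its iterations onto parallel MAC units, tiling bounds the working set kept in on-chip buffers, and interchange reorders the levels to change the data reuse pattern.

```c
/* Minimal sketch of the four-level convolution loop nest of a CNN layer.
 * Dimensions and names are illustrative assumptions, not the paper's. */
#include <stdio.h>

#define NIF 3                /* input feature maps (channels)            */
#define NOF 4                /* output feature maps                      */
#define NOX 5                /* output feature-map width                 */
#define NOY 5                /* output feature-map height                */
#define NK  3                /* kernel width/height                      */
#define NIX (NOX + NK - 1)   /* input width  (stride 1, no padding)      */
#define NIY (NOY + NK - 1)   /* input height (stride 1, no padding)      */

static float in[NIF][NIY][NIX];
static float wt[NOF][NIF][NK][NK];
static float out[NOF][NOY][NOX];

void conv_layer(void)
{
    /* In a hardware accelerator, unrolling any of these loop levels maps
     * its iterations onto parallel MAC units; tiling them determines how
     * much input/weight/output data must be buffered on-chip at a time. */

    /* Loop level 4: across output feature maps */
    for (int of = 0; of < NOF; of++)
        /* Loop level 3: across pixels of one output feature map */
        for (int oy = 0; oy < NOY; oy++)
            for (int ox = 0; ox < NOX; ox++) {
                float acc = 0.0f;
                /* Loop level 2: across input feature maps */
                for (int ifm = 0; ifm < NIF; ifm++)
                    /* Loop level 1: within one kernel window */
                    for (int ky = 0; ky < NK; ky++)
                        for (int kx = 0; kx < NK; kx++)
                            acc += in[ifm][oy + ky][ox + kx] *
                                   wt[of][ifm][ky][kx];
                out[of][oy][ox] = acc;
            }
}

int main(void)
{
    /* Fill inputs and weights with a simple pattern so the result is known. */
    for (int c = 0; c < NIF; c++)
        for (int y = 0; y < NIY; y++)
            for (int x = 0; x < NIX; x++)
                in[c][y][x] = 1.0f;
    for (int o = 0; o < NOF; o++)
        for (int c = 0; c < NIF; c++)
            for (int y = 0; y < NK; y++)
                for (int x = 0; x < NK; x++)
                    wt[o][c][y][x] = 0.1f;

    conv_layer();
    printf("out[0][0][0] = %f\n", out[0][0][0]);  /* expect 3*3*3*0.1 = 2.7 */
    return 0;
}
```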

Original language: English (US)
Title of host publication: FPGA 2017 - Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays
Publisher: Association for Computing Machinery, Inc
Pages: 45-54
Number of pages: 10
ISBN (Electronic): 9781450343541
DOIs
State: Published - Feb 22 2017
Externally published: Yes
Event: 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA 2017 - Monterey, United States
Duration: Feb 22 2017 - Feb 24 2017

Publication series

Name: FPGA 2017 - Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays

Conference

Conference: 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA 2017
Country/Territory: United States
City: Monterey
Period: 2/22/17 - 2/24/17

Bibliographical note

Publisher Copyright:
© 2017 ACM.

Keywords

  • Convolutional neural networks
  • FPGA
  • Hardware acceleration

