SC-DCNN: Highly-scalable deep convolutional neural network using stochastic computing

Ao Ren, Zhe Li, Caiwen Ding, Qinru Qiu, Yanzhi Wang, Ji Li, Xuehai Qian, Bo Yuan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

With the recent advance of wearable devices and the Internet of Things (IoT), it becomes attractive to implement Deep Convolutional Neural Networks (DCNNs) in embedded and portable systems. Currently, executing software-based DCNNs requires high-performance servers, restricting their widespread deployment on embedded and mobile IoT devices. To overcome this obstacle, considerable research efforts have been made to develop highly-parallel and specialized DCNN accelerators using GPGPUs, FPGAs, or ASICs. Stochastic Computing (SC), which uses a bit-stream to represent a number within [-1, 1] by counting the number of ones in the bit-stream, has high potential for implementing DCNNs with high scalability and ultra-low hardware footprint. Since multiplications and additions can be calculated using AND gates and multiplexers in SC, significant reductions in power (energy) and hardware footprint can be achieved compared to conventional binary arithmetic implementations. The tremendous savings in power (energy) and hardware resources allow an immense design space for enhancing the scalability and robustness of hardware DCNNs. This paper presents SC-DCNN, the first comprehensive design and optimization framework for SC-based DCNNs, using a bottom-up approach. We first present the designs of function blocks that perform the basic operations in a DCNN, including inner product, pooling, and activation function. Then we propose four designs of feature extraction blocks, which are in charge of extracting features from input feature maps, by connecting different basic function blocks with joint optimization. Moreover, efficient weight storage methods are proposed to reduce the area and power (energy) consumption. Putting it all together, with feature extraction blocks carefully selected, SC-DCNN is holistically optimized to minimize area and power (energy) consumption while maintaining high network accuracy. Experimental results demonstrate that the LeNet5 implemented in SC-DCNN consumes only 17 mm² area and 1.53 W power, and achieves a throughput of 781,250 images/s, area efficiency of 45,946 images/s/mm², and energy efficiency of 510,734 images/J.
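To make the abstract's claim concrete, the following is a minimal software sketch of the stochastic computing primitives it mentions: encoding a value as a random bit-stream, multiplication via a bit-wise AND, and scaled addition via a 2-to-1 multiplexer. This is an illustration in the unipolar format (values in [0, 1]), not the paper's hardware design; the bipolar format over [-1, 1] used in SC-DCNN would replace the AND with an XNOR. The stream length and helper names here are assumptions for illustration only.

# Illustrative simulation of stochastic computing (SC) primitives.
# Unipolar format: a value x in [0, 1] is the probability of a 1 in the stream.
# Stream length and function names are assumptions, not from the paper.

import random

STREAM_LEN = 1024  # longer streams reduce stochastic error, at higher latency

def to_stream(x, n=STREAM_LEN):
    """Encode x in [0, 1] as a bit-stream whose fraction of ones is x."""
    return [1 if random.random() < x else 0 for _ in range(n)]

def from_stream(bits):
    """Decode a unipolar stream by counting the number of ones."""
    return sum(bits) / len(bits)

def sc_multiply(a_bits, b_bits):
    """Unipolar SC multiplication: bit-wise AND of the two streams."""
    return [a & b for a, b in zip(a_bits, b_bits)]

def sc_scaled_add(a_bits, b_bits, sel_bits):
    """SC addition via a multiplexer: a select stream with p = 0.5 picks
    one input per cycle, yielding a stream that encodes (a + b) / 2."""
    return [a if s else b for a, b, s in zip(a_bits, b_bits, sel_bits)]

if __name__ == "__main__":
    a, b = 0.6, 0.3
    sa, sb, sel = to_stream(a), to_stream(b), to_stream(0.5)
    print("a*b     ~", from_stream(sc_multiply(sa, sb)), "(exact:", a * b, ")")
    print("(a+b)/2 ~", from_stream(sc_scaled_add(sa, sb, sel)), "(exact:", (a + b) / 2, ")")

The sketch shows why the hardware footprint is so small: each multiply costs a single gate per bit-stream cycle and each scaled add costs one multiplexer, traded against stream length (and thus latency) needed to keep the stochastic error low.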

Original language: English (US)
Title of host publication: ASPLOS 2017 - 22nd International Conference on Architectural Support for Programming Languages and Operating Systems
Publisher: Association for Computing Machinery
Pages: 405-418
Number of pages: 14
ISBN (Electronic): 9781450344654
DOIs
State: Published - Apr 4, 2017
Externally published: Yes
Event: 22nd International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2017 - Xi'an, China
Duration: Apr 8, 2017 to Apr 12, 2017

Publication series

Name: International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS
Volume: Part F127193

Other

Other: 22nd International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2017
Country/Territory: China
City: Xi'an
Period: 4/8/17 to 4/12/17

Bibliographical note

Publisher Copyright:
© 2017 ACM.
