Low-cost stochastic hybrid multiplier for quantized neural networks

Bingzhe Li, M. Hassan Najafi, David J. Lilja

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

With the increased interest in neural networks, hardware implementations of neural networks have been widely investigated. Researchers pursue low hardware cost using technologies such as stochastic computing (SC) and quantization. More specifically, quantization reduces the total number of distinct trained weights, which in turn lowers hardware cost. SC lowers hardware cost substantially by replacing complex arithmetic operations with simple logic gates. However, the combined advantages of quantization and SC in neural networks have not been well investigated. In this article, we propose a new stochastic multiplier built from simple CMOS transistors, called the stochastic hybrid multiplier, for quantized neural networks. The new design exploits the characteristics of quantized weights and dramatically reduces the hardware cost of neural networks. Experimental results indicate that our stochastic design achieves about 7.7x energy reduction compared to its binary counterpart while incurring only slightly higher recognition error rates than the binary implementation. Compared to previous stochastic neural network implementations, our work achieves at least 4x, 9x, and 10x reductions in area, power, and energy, respectively.
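The abstract's claim that SC replaces complex arithmetic with simple gates can be illustrated with the standard unipolar SC multiplication scheme, where a single AND gate multiplies two independent bitstreams. The sketch below is a software simulation of that general principle only; it is not the paper's hybrid multiplier design, and all function names are illustrative.

```python
import random

def to_bitstream(p, length, rng):
    # Unipolar SC encoding: each bit is 1 with probability p,
    # so the stream's mean value encodes p in [0, 1].
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(stream_a, stream_b):
    # For independent unipolar streams, one AND gate per bit
    # computes the product: P(a AND b) = P(a) * P(b).
    return [a & b for a, b in zip(stream_a, stream_b)]

def decode(stream):
    # Recover the encoded value as the fraction of 1s.
    return sum(stream) / len(stream)

rng = random.Random(42)
length = 4096
a, b = 0.75, 0.5
product = decode(sc_multiply(to_bitstream(a, length, rng),
                             to_bitstream(b, length, rng)))
# product approximates a * b = 0.375, up to stochastic noise
```

The trade-off the abstract alludes to is visible here: the per-bit hardware is a single AND gate, but accuracy depends on stream length, which is why SC designs accept slightly higher error rates in exchange for large area and energy savings.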

Original language: English (US)
Article number: 18
Journal: ACM Journal on Emerging Technologies in Computing Systems
Volume: 15
Issue number: 2
DOIs
State: Published - Mar 2019

Bibliographical note

Publisher Copyright:
© 2019 Association for Computing Machinery.


Keywords

  • Low power design
  • Multiplier
  • Quantized neural network
  • Stochastic computing
