Neural network classifiers using a hardware-based approximate activation function with a hybrid stochastic multiplier

Bingzhe Li, Yaobin Qin, Bo Yuan, David J. Lilja

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Neural networks are becoming prevalent in many areas, such as pattern recognition and medical diagnosis. Stochastic computing is one potential solution for neural networks implemented in low-power back-end devices such as solar-powered devices and Internet of Things (IoT) devices. In this article, we investigate a new architecture of stochastic neural networks with a hardware-oriented approximate activation function. The newly proposed approximate activation function can be hidden in the proposed architecture, reducing the overall hardware cost. Additionally, to further reduce the hardware cost of the stochastic implementation, a new hybrid stochastic multiplier is proposed. It combines OR gates with a binary parallel counter, using the OR gates to reduce the number of inputs to the counter. The experimental results indicate that the newly proposed approximate architecture without hybrid stochastic multipliers achieves more than 25%, 60%, and 3x reductions compared to previous stochastic neural networks, and more than 30x, 30x, and 52% reductions compared to conventional binary neural networks, in terms of area, power, and energy, respectively, while maintaining error rates similar to those of conventional neural networks. Furthermore, the stochastic implementation with hybrid stochastic multipliers further reduces area by about 18% to 80%, power by 15% to 113.1%, and energy by about 15% to 131%, respectively.
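The stochastic-computing idea behind the abstract can be sketched in software. The snippet below is a minimal behavioral model of a conventional unipolar stochastic multiply-accumulate: values in [0, 1] are encoded as random bitstreams, per-product AND gates multiply them, and a software popcount stands in for the binary parallel counter. It is an illustration of the general technique only, not the paper's hybrid OR-gate design; all function names are invented for this sketch.

```python
import random

def to_bitstream(p, length, rng):
    """Encode a probability p in [0, 1] as a unipolar stochastic bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def stochastic_multiply(xs, ws, length=4096, seed=0):
    """Estimate sum(x_i * w_i) stochastically: an AND gate per product,
    with a per-cycle popcount standing in for the binary parallel counter."""
    rng = random.Random(seed)
    streams_x = [to_bitstream(x, length, rng) for x in xs]
    streams_w = [to_bitstream(w, length, rng) for w in ws]
    total = 0
    for t in range(length):
        # AND of two independent unipolar streams has probability x * w;
        # summing the product bits across inputs each cycle accumulates
        # the dot product, as a parallel counter would in hardware.
        total += sum(sx[t] & sw[t] for sx, sw in zip(streams_x, streams_w))
    return total / length  # estimate of sum_i x_i * w_i

# e.g. stochastic_multiply([0.5, 0.25], [0.5, 0.8]) is close to 0.45
```

Accuracy grows with bitstream length (the estimate's standard error shrinks roughly as 1/sqrt(length)), which is the usual latency-versus-precision trade-off in stochastic computing.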

Original language: English (US)
Article number: 12
Journal: ACM Journal on Emerging Technologies in Computing Systems
Volume: 15
Issue number: 1
DOI: 10.1145/3284933
State: Published - Jan 2019

Keywords

  • Approximate activation function
  • Neural networks
  • Stochastic computing

Cite this

@article{a5c43c0f998b4e1a903e0b4d6d52be70,
title = "Neural network classifiers using a hardware-based approximate activation function with a hybrid stochastic multiplier",
abstract = "Neural networks are becoming prevalent in many areas, such as pattern recognition and medical diagnosis. Stochastic computing is one potential solution for neural networks implemented in low-power back-end devices such as solar-powered devices and Internet of Things (IoT) devices. In this article, we investigate a new architecture of stochastic neural networks with a hardware-oriented approximate activation function. The newly proposed approximate activation function can be hidden in the proposed architecture, reducing the overall hardware cost. Additionally, to further reduce the hardware cost of the stochastic implementation, a new hybrid stochastic multiplier is proposed. It combines OR gates with a binary parallel counter, using the OR gates to reduce the number of inputs to the counter. The experimental results indicate that the newly proposed approximate architecture without hybrid stochastic multipliers achieves more than 25{\%}, 60{\%}, and 3x reductions compared to previous stochastic neural networks, and more than 30x, 30x, and 52{\%} reductions compared to conventional binary neural networks, in terms of area, power, and energy, respectively, while maintaining error rates similar to those of conventional neural networks. Furthermore, the stochastic implementation with hybrid stochastic multipliers further reduces area by about 18{\%} to 80{\%}, power by 15{\%} to 113.1{\%}, and energy by about 15{\%} to 131{\%}, respectively.",
keywords = "Approximate activation function, Neural networks, Stochastic computing",
author = "Li, Bingzhe and Qin, Yaobin and Yuan, Bo and Lilja, {David J.}",
year = "2019",
month = jan,
doi = "10.1145/3284933",
language = "English (US)",
volume = "15",
journal = "ACM Journal on Emerging Technologies in Computing Systems",
issn = "1550-4832",
publisher = "Association for Computing Machinery (ACM)",
number = "1",
}

TY - JOUR
T1 - Neural network classifiers using a hardware-based approximate activation function with a hybrid stochastic multiplier
AU - Li, Bingzhe
AU - Qin, Yaobin
AU - Yuan, Bo
AU - Lilja, David J.
PY - 2019/1
Y1 - 2019/1
N2 - Neural networks are becoming prevalent in many areas, such as pattern recognition and medical diagnosis. Stochastic computing is one potential solution for neural networks implemented in low-power back-end devices such as solar-powered devices and Internet of Things (IoT) devices. In this article, we investigate a new architecture of stochastic neural networks with a hardware-oriented approximate activation function. The newly proposed approximate activation function can be hidden in the proposed architecture, reducing the overall hardware cost. Additionally, to further reduce the hardware cost of the stochastic implementation, a new hybrid stochastic multiplier is proposed. It combines OR gates with a binary parallel counter, using the OR gates to reduce the number of inputs to the counter. The experimental results indicate that the newly proposed approximate architecture without hybrid stochastic multipliers achieves more than 25%, 60%, and 3x reductions compared to previous stochastic neural networks, and more than 30x, 30x, and 52% reductions compared to conventional binary neural networks, in terms of area, power, and energy, respectively, while maintaining error rates similar to those of conventional neural networks. Furthermore, the stochastic implementation with hybrid stochastic multipliers further reduces area by about 18% to 80%, power by 15% to 113.1%, and energy by about 15% to 131%, respectively.
AB - Neural networks are becoming prevalent in many areas, such as pattern recognition and medical diagnosis. Stochastic computing is one potential solution for neural networks implemented in low-power back-end devices such as solar-powered devices and Internet of Things (IoT) devices. In this article, we investigate a new architecture of stochastic neural networks with a hardware-oriented approximate activation function. The newly proposed approximate activation function can be hidden in the proposed architecture, reducing the overall hardware cost. Additionally, to further reduce the hardware cost of the stochastic implementation, a new hybrid stochastic multiplier is proposed. It combines OR gates with a binary parallel counter, using the OR gates to reduce the number of inputs to the counter. The experimental results indicate that the newly proposed approximate architecture without hybrid stochastic multipliers achieves more than 25%, 60%, and 3x reductions compared to previous stochastic neural networks, and more than 30x, 30x, and 52% reductions compared to conventional binary neural networks, in terms of area, power, and energy, respectively, while maintaining error rates similar to those of conventional neural networks. Furthermore, the stochastic implementation with hybrid stochastic multipliers further reduces area by about 18% to 80%, power by 15% to 113.1%, and energy by about 15% to 131%, respectively.
KW - Approximate activation function
KW - Neural networks
KW - Stochastic computing
UR - http://www.scopus.com/inward/record.url?scp=85061088716&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061088716&partnerID=8YFLogxK
U2 - 10.1145/3284933
DO - 10.1145/3284933
M3 - Article
AN - SCOPUS:85061088716
VL - 15
JO - ACM Journal on Emerging Technologies in Computing Systems
JF - ACM Journal on Emerging Technologies in Computing Systems
SN - 1550-4832
IS - 1
M1 - 12
ER -