VLSI architectures for the restricted Boltzmann machine

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Neural network (NN) systems are widely used in many important applications ranging from computer vision to speech recognition. To date, most NN systems are processed by general-purpose processors such as CPUs or GPUs. However, as dataset and network sizes rapidly increase, these software implementations suffer from long training times. To overcome this problem, specialized hardware accelerators are needed to build high-speed NN systems. This article presents an efficient hardware architecture for the restricted Boltzmann machine (RBM), an important category of NN systems. Various hardware-level optimizations are performed to improve the training speed. As-soon-as-possible and overlapped-scheduling approaches are used to reduce latency. It is shown that, compared with the flat design, the proposed RBM architecture achieves a 50% reduction in training time. In addition, an on-the-fly computation scheme reduces the storage requirement for binary and stochastic states by several hundred times. Based on the proposed approach, a 784-2252 RBM design example is then developed for the MNIST handwritten digit recognition dataset. Analysis shows that the VLSI design of the RBM achieves significant improvements in training speed and energy efficiency compared to CPU/GPU-based solutions.
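For context, the computation the paper accelerates can be sketched in software. The following is an illustrative one-step contrastive-divergence (CD-1) training sketch for an RBM with the 784-visible/2252-hidden sizes mentioned in the abstract; it is not the authors' hardware design, and all variable and function names are hypothetical. The stochastic binary hidden states are sampled on the fly rather than stored, loosely mirroring the memory-reduction idea described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: RBM dimensions from the abstract's design example.
n_visible, n_hidden = 784, 2252
W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))  # weight matrix
b_v = np.zeros(n_visible)                              # visible biases
b_h = np.zeros(n_hidden)                               # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, lr=0.1):
    """One CD-1 update; binary/stochastic states are generated on the fly
    instead of being stored for the whole batch."""
    global W, b_v, b_h
    p_h0 = sigmoid(v0 @ W + b_h)                  # hidden unit probabilities
    h0 = (rng.random(n_hidden) < p_h0) * 1.0      # stochastic binary hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                # visible reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)                # hidden probabilities, step 2
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_v += lr * (v0 - p_v1)
    b_h += lr * (p_h0 - p_h1)
    return np.mean((v0 - p_v1) ** 2)              # reconstruction error

v = (rng.random(n_visible) < 0.5) * 1.0           # stand-in for a binarized MNIST image
err = cd1_step(v)
```

Each CD-1 step is dominated by the three matrix-vector products, which is why a dedicated VLSI datapath with overlapped scheduling of these stages can cut training latency relative to a flat (sequential) design.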

Original language: English (US)
Article number: 35
Journal: ACM Journal on Emerging Technologies in Computing Systems
Volume: 13
Issue number: 3
DOI: 10.1145/3007193
State: Published - May 1, 2017


Keywords

  • Restricted Boltzmann machine (RBM)
  • VLSI
  • high-speed
  • memory reduction
  • neural network (NN)
  • overlapped-scheduling
  • reduced-latency

Cite this

VLSI architectures for the restricted Boltzmann machine. / Yuan, Bo; Parhi, Keshab K.

In: ACM Journal on Emerging Technologies in Computing Systems, Vol. 13, No. 3, 35, 01.05.2017.

Research output: Contribution to journal › Article

@article{41a3450aa52a4f12ae01703875201563,
title = "VLSI architectures for the restricted Boltzmann machine",
abstract = "Neural network (NN) systems are widely used in many important applications ranging from computer vision to speech recognition. To date, most NN systems are processed by general-purpose processors such as CPUs or GPUs. However, as dataset and network sizes rapidly increase, these software implementations suffer from long training times. To overcome this problem, specialized hardware accelerators are needed to build high-speed NN systems. This article presents an efficient hardware architecture for the restricted Boltzmann machine (RBM), an important category of NN systems. Various hardware-level optimizations are performed to improve the training speed. As-soon-as-possible and overlapped-scheduling approaches are used to reduce latency. It is shown that, compared with the flat design, the proposed RBM architecture achieves a 50{\%} reduction in training time. In addition, an on-the-fly computation scheme reduces the storage requirement for binary and stochastic states by several hundred times. Based on the proposed approach, a 784-2252 RBM design example is then developed for the MNIST handwritten digit recognition dataset. Analysis shows that the VLSI design of the RBM achieves significant improvements in training speed and energy efficiency compared to CPU/GPU-based solutions.",
keywords = "Restricted Boltzmann machine (RBM), VLSI, high-speed, memory reduction, neural network (NN), overlapped-scheduling, reduced-latency",
author = "Yuan, Bo and Parhi, {Keshab K.}",
year = "2017",
month = "5",
day = "1",
doi = "10.1145/3007193",
language = "English (US)",
volume = "13",
journal = "ACM Journal on Emerging Technologies in Computing Systems",
issn = "1550-4832",
publisher = "Association for Computing Machinery (ACM)",
number = "3",

}

TY - JOUR

T1 - VLSI architectures for the restricted Boltzmann machine

AU - Yuan, Bo

AU - Parhi, Keshab K.

PY - 2017/5/1

Y1 - 2017/5/1

N2 - Neural network (NN) systems are widely used in many important applications ranging from computer vision to speech recognition. To date, most NN systems are processed by general-purpose processors such as CPUs or GPUs. However, as dataset and network sizes rapidly increase, these software implementations suffer from long training times. To overcome this problem, specialized hardware accelerators are needed to build high-speed NN systems. This article presents an efficient hardware architecture for the restricted Boltzmann machine (RBM), an important category of NN systems. Various hardware-level optimizations are performed to improve the training speed. As-soon-as-possible and overlapped-scheduling approaches are used to reduce latency. It is shown that, compared with the flat design, the proposed RBM architecture achieves a 50% reduction in training time. In addition, an on-the-fly computation scheme reduces the storage requirement for binary and stochastic states by several hundred times. Based on the proposed approach, a 784-2252 RBM design example is then developed for the MNIST handwritten digit recognition dataset. Analysis shows that the VLSI design of the RBM achieves significant improvements in training speed and energy efficiency compared to CPU/GPU-based solutions.

AB - Neural network (NN) systems are widely used in many important applications ranging from computer vision to speech recognition. To date, most NN systems are processed by general-purpose processors such as CPUs or GPUs. However, as dataset and network sizes rapidly increase, these software implementations suffer from long training times. To overcome this problem, specialized hardware accelerators are needed to build high-speed NN systems. This article presents an efficient hardware architecture for the restricted Boltzmann machine (RBM), an important category of NN systems. Various hardware-level optimizations are performed to improve the training speed. As-soon-as-possible and overlapped-scheduling approaches are used to reduce latency. It is shown that, compared with the flat design, the proposed RBM architecture achieves a 50% reduction in training time. In addition, an on-the-fly computation scheme reduces the storage requirement for binary and stochastic states by several hundred times. Based on the proposed approach, a 784-2252 RBM design example is then developed for the MNIST handwritten digit recognition dataset. Analysis shows that the VLSI design of the RBM achieves significant improvements in training speed and energy efficiency compared to CPU/GPU-based solutions.

KW - Restricted Boltzmann machine (RBM)

KW - VLSI

KW - high-speed

KW - memory reduction

KW - neural network (NN)

KW - overlapped-scheduling

KW - reduced-latency

UR - http://www.scopus.com/inward/record.url?scp=85019913374&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85019913374&partnerID=8YFLogxK

U2 - 10.1145/3007193

DO - 10.1145/3007193

M3 - Article

AN - SCOPUS:85019913374

VL - 13

JO - ACM Journal on Emerging Technologies in Computing Systems

JF - ACM Journal on Emerging Technologies in Computing Systems

SN - 1550-4832

IS - 3

M1 - 35

ER -