TY - JOUR
T1 - SpikeSim
T2 - An End-to-End Compute-in-Memory Hardware Evaluation Tool for Benchmarking Spiking Neural Networks
AU - Moitra, Abhishek
AU - Bhattacharjee, Abhiroop
AU - Kuang, Runcong
AU - Krishnan, Gokul
AU - Cao, Yu
AU - Panda, Priyadarshini
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2023/11/1
Y1 - 2023/11/1
N2 - Spiking neural networks (SNNs) are an active research domain toward energy-efficient machine intelligence. Compared to conventional artificial neural networks (ANNs), SNNs use temporal spike data and bio-plausible neuronal activation functions, such as leaky-integrate-and-fire/integrate-and-fire (LIF/IF), for data processing. However, SNNs incur a large number of dot-product operations, causing high memory and computation overhead on standard von Neumann computing platforms. To this end, in-memory computing (IMC) architectures have been proposed to alleviate the 'memory-wall bottleneck' prevalent in von Neumann architectures. Although recent works have proposed IMC-based SNN hardware accelerators, the following key implementation aspects have been overlooked: 1) the adverse effects of crossbar nonidealities on SNN performance due to repeated analog dot-product operations over multiple time-steps and 2) the hardware overheads of essential SNN-specific components, such as the LIF/IF and data-communication modules. To this end, we propose SpikeSim, a tool that performs realistic performance, energy, latency, and area evaluation of IMC-mapped SNNs. SpikeSim consists of a practical monolithic IMC architecture, called SpikeFlow, for mapping SNNs. Additionally, the nonideality computation engine (NICE) and the energy-latency-area (ELA) engine perform hardware-realistic evaluation of SpikeFlow-mapped SNNs. Based on a 65-nm CMOS implementation and experiments on the CIFAR10, CIFAR100, and TinyImageNet datasets, we find that the LIF/IF neuronal module contributes significantly to the total hardware area (>11%). To this end, we propose SNN topological modifications that lead to 1.24× and 10× reductions in the neuronal module's area and the overall energy-delay product, respectively. Furthermore, we perform a holistic comparison between IMC-implemented ANNs and SNNs and conclude that a lower number of time-steps is key to achieving higher throughput and energy efficiency for SNNs compared to 4-bit ANNs. The code repository for the SpikeSim tool is available on GitHub.
AB - Spiking neural networks (SNNs) are an active research domain toward energy-efficient machine intelligence. Compared to conventional artificial neural networks (ANNs), SNNs use temporal spike data and bio-plausible neuronal activation functions, such as leaky-integrate-and-fire/integrate-and-fire (LIF/IF), for data processing. However, SNNs incur a large number of dot-product operations, causing high memory and computation overhead on standard von Neumann computing platforms. To this end, in-memory computing (IMC) architectures have been proposed to alleviate the 'memory-wall bottleneck' prevalent in von Neumann architectures. Although recent works have proposed IMC-based SNN hardware accelerators, the following key implementation aspects have been overlooked: 1) the adverse effects of crossbar nonidealities on SNN performance due to repeated analog dot-product operations over multiple time-steps and 2) the hardware overheads of essential SNN-specific components, such as the LIF/IF and data-communication modules. To this end, we propose SpikeSim, a tool that performs realistic performance, energy, latency, and area evaluation of IMC-mapped SNNs. SpikeSim consists of a practical monolithic IMC architecture, called SpikeFlow, for mapping SNNs. Additionally, the nonideality computation engine (NICE) and the energy-latency-area (ELA) engine perform hardware-realistic evaluation of SpikeFlow-mapped SNNs. Based on a 65-nm CMOS implementation and experiments on the CIFAR10, CIFAR100, and TinyImageNet datasets, we find that the LIF/IF neuronal module contributes significantly to the total hardware area (>11%). To this end, we propose SNN topological modifications that lead to 1.24× and 10× reductions in the neuronal module's area and the overall energy-delay product, respectively. Furthermore, we perform a holistic comparison between IMC-implemented ANNs and SNNs and conclude that a lower number of time-steps is key to achieving higher throughput and energy efficiency for SNNs compared to 4-bit ANNs. The code repository for the SpikeSim tool is available on GitHub.
KW - Analog crossbars
KW - emerging devices
KW - in-memory computing (IMC)
KW - spiking neural networks (SNNs)
UR - http://www.scopus.com/inward/record.url?scp=85159833538&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85159833538&partnerID=8YFLogxK
U2 - 10.1109/TCAD.2023.3274918
DO - 10.1109/TCAD.2023.3274918
M3 - Article
AN - SCOPUS:85159833538
SN - 0278-0070
VL - 42
SP - 3815
EP - 3828
JO - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
JF - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
IS - 11
ER -