With the expansion of neural network (NN) applications, lowering their hardware implementation cost has become an urgent task, especially in back-end applications where the power supply is limited. Stochastic computing (SC) is a promising solution for realizing low-cost hardware designs. The implementation of matrix multiplication has been a bottleneck in previous stochastic computing neural networks (SC-NNs). In this paper, we introduce spintronic components into the design of SC-NNs. A novel spin-CMOS matrix multiplier is proposed, in which the stochastic multiplications are performed by CMOS AND gates while the sum of products is implemented by spintronic compressor gates. The experimental results indicate that, compared to conventional binary implementations, the proposed hybrid spin-CMOS architecture achieves over 125x, 4.5x, and 43x reductions in power, energy, and area, respectively. Moreover, compared to previous CMOS-based SC-NNs, our design reduces power consumption by 3.1x-7.3x, energy consumption by 3.1x-7.3x, and area by 1.4x-7.6x while maintaining similar recognition rates.
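The abstract's core SC trick, multiplying two values with a single AND gate, can be illustrated with a minimal software sketch. This is an assumption-laden illustration of unipolar stochastic multiplication in general, not the authors' spin-CMOS hardware; the function names are invented for this example.

```python
import random

def to_bitstream(p, n, rng):
    # Unipolar SC encoding: a probability p in [0, 1] becomes a
    # length-n bitstream in which each bit is 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(xs, ys):
    # A bitwise AND of two independent streams multiplies the
    # encoded values: P(x AND y) = P(x) * P(y).
    return [a & b for a, b in zip(xs, ys)]

def value(bits):
    # Decode a bitstream back to its encoded probability.
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000
x = to_bitstream(0.5, n, rng)
y = to_bitstream(0.6, n, rng)
z = sc_multiply(x, y)
# value(z) approximates 0.5 * 0.6 = 0.3, up to sampling noise
```

In hardware, each stream is a serial wire and `sc_multiply` is literally one AND gate per product, which is why SC multipliers are so cheap; the paper's contribution is accumulating many such products with spintronic compressor gates instead of CMOS adder trees.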
|Original language||English (US)|
|Title of host publication||GLSVLSI 2019 - Proceedings of the 2019 Great Lakes Symposium on VLSI|
|Publisher||Association for Computing Machinery|
|Number of pages||6|
|State||Published - May 13 2019|
|Event||29th Great Lakes Symposium on VLSI, GLSVLSI 2019 - Tysons Corner, United States|
Duration: May 9 2019 → May 11 2019
|Name||Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI|
|Conference||29th Great Lakes Symposium on VLSI, GLSVLSI 2019|
|Period||5/9/19 → 5/11/19|
Bibliographical note
Funding Information: This work was supported in part by National Science Foundation grant no. CCF-1408123 and Seagate Technology.
© 2019 ACM.
- Neural network
- Stochastic computing