PermDNN: Efficient compressed DNN architecture with permuted diagonal matrices

Chunhua Deng, Siyu Liao, Yi Xie, Keshab K Parhi, Xuehai Qian, Bo Yuan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

Deep neural networks (DNNs) have emerged as the most important and popular artificial intelligence (AI) technique. The growth of model size poses a key energy efficiency challenge for the underlying computing platform, so model compression becomes a crucial problem. However, current approaches are limited by various drawbacks. Specifically, the network sparsification approach suffers from irregularity, its heuristic nature, and large indexing overhead. On the other hand, the recent structured matrix-based approach (i.e., CirCNN) is limited by relatively complex arithmetic computation (i.e., FFT), a less flexible compression ratio, and its inability to fully utilize input sparsity. To address these drawbacks, this paper proposes PermDNN, a novel approach that generates and executes hardware-friendly structured sparse DNN models using permuted diagonal matrices. Compared with the unstructured sparsification approach, PermDNN eliminates the drawbacks of indexing overhead, heuristic compression effects, and time-consuming retraining. Compared with the circulant structure-imposing approach, PermDNN enjoys higher reduction in computational complexity, a flexible compression ratio, simple arithmetic computation, and full utilization of input sparsity. We propose the PermDNN architecture, a multi-processing-element (PE), fully-connected (FC) layer-targeted computing engine. The architecture is highly scalable and flexible, and hence can support the needs of different applications with different model configurations. We implement a 32-PE design in 28nm CMOS technology. Compared with EIE, PermDNN achieves 3.3x-4.8x higher throughput, 5.9x-8.5x better area efficiency, and 2.8x-4.0x better energy efficiency on different workloads. Compared with CirCNN, PermDNN achieves 11.51x higher throughput and 3.89x better energy efficiency.
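To make the compression scheme concrete, here is a minimal NumPy sketch of the permuted-diagonal idea as described in the abstract. It is an illustration only, not the authors' implementation: the function names (permuted_diag_block, permdnn_matvec) and the cyclic-shift parameterization of the permutation are assumptions for the example. Each p x p block of an FC weight matrix stores just p nonzero values plus one shift offset (a p-fold compression), and the matrix-vector product needs no per-nonzero index storage.

import numpy as np

def permuted_diag_block(d, k):
    # Hypothetical helper: dense p x p block whose nonzeros lie on a
    # cyclically permuted diagonal, i.e. B[i, (i + k) % p] = d[i].
    p = len(d)
    B = np.zeros((p, p))
    B[np.arange(p), (np.arange(p) + k) % p] = d
    return B

def permdnn_matvec(diags, offsets, x, p):
    # y = W @ x for W laid out as an m x n grid of p x p permuted
    # diagonal blocks. Only p values and one offset are stored per
    # block, so storage drops from p*p to p per block, and no
    # per-nonzero index array (as in unstructured sparsity) is needed.
    m, n = len(diags), len(diags[0])
    y = np.zeros(m * p)
    for r in range(m):
        for c in range(n):
            d, k = diags[r][c], offsets[r][c]
            xc = x[c * p:(c + 1) * p]
            y[r * p:(r + 1) * p] += d * xc[(np.arange(p) + k) % p]
    return y

# Sanity check against an explicit dense weight matrix.
p, m, n = 4, 2, 3
rng = np.random.default_rng(0)
diags = [[rng.standard_normal(p) for _ in range(n)] for _ in range(m)]
offsets = [[int(rng.integers(p)) for _ in range(n)] for _ in range(m)]
x = rng.standard_normal(n * p)
W = np.block([[permuted_diag_block(diags[r][c], offsets[r][c])
               for c in range(n)] for r in range(m)])
assert np.allclose(W @ x, permdnn_matvec(diags, offsets, x, p))

Note how, unlike CSR-style unstructured sparsity, the nonzero positions are implied entirely by the single offset per block; this is what removes the indexing overhead the abstract refers to.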

Original language: English (US)
Title of host publication: Proceedings - 51st Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2018
Publisher: IEEE Computer Society
Pages: 189-202
Number of pages: 14
ISBN (Electronic): 9781538662403
DOIs: https://doi.org/10.1109/MICRO.2018.00024
State: Published - Dec 12 2018
Event: 51st Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2018 - Fukuoka, Japan
Duration: Oct 20 2018 - Oct 24 2018

Publication series

Name: Proceedings of the Annual International Symposium on Microarchitecture, MICRO
Volume: 2018-October
ISSN (Print): 1072-4451

Other

Other: 51st Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2018
Country: Japan
City: Fukuoka
Period: 10/20/18 - 10/24/18

Keywords

  • Deep Learning
  • Model compression
  • VLSI

Cite this

Deng, C., Liao, S., Xie, Y., Parhi, K. K., Qian, X., & Yuan, B. (2018). PermDNN: Efficient compressed DNN architecture with permuted diagonal matrices. In Proceedings - 51st Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2018 (pp. 189-202). [8574541] (Proceedings of the Annual International Symposium on Microarchitecture, MICRO; Vol. 2018-October). IEEE Computer Society. https://doi.org/10.1109/MICRO.2018.00024
