Abstract
Although Transformer-based deep learning models have been widely used in many natural language processing (NLP) tasks as well as in computer vision, they suffer from gigantic model size and long latency. Network pruning can reduce the computational cost and model size. However, existing works mainly focus on irregular (sparse) pruning, which often causes irregular computation patterns and requires an extra index for each remaining weight. In this work, we propose a Tensor-core-inspired hierarchical model compression method to push the performance limit on modern GPUs. We present two modes of the two-step process. In the first mode, we use a Tensor-core-aware block-based weight pruning method to exploit model sparsity in a coarse-grained manner and then use low-rank decomposition [33] to further reduce the weight storage in a fine-grained manner. In the second mode, we first use irregular pruning to achieve a highly sparse model and then apply the Tensor-core-aware weight constraint on the sparse model to decompose the sparse matrix into several smaller but Tensor-core-friendly sub-matrices. Experiments on Transformer and BERT-base models show that the proposed method outperforms the state-of-the-art.
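To make the first mode concrete, below is a minimal NumPy sketch, not the paper's implementation: it zeroes out whole weight tiles (block-based pruning, with the tile shape assumed to align with Tensor-core-friendly dimensions) and then applies a truncated-SVD low-rank factorization to the pruned matrix. The function names, the 16×16 block size, the sparsity target, and the rank are all illustrative assumptions.

```python
import numpy as np

def block_prune(W, block=16, sparsity=0.5):
    """Zero out whole (block x block) tiles of W, keeping the tiles with the
    largest Frobenius norm. The tile size is an assumed Tensor-core-friendly
    shape chosen for illustration only."""
    rows, cols = W.shape
    assert rows % block == 0 and cols % block == 0, "pad W to a multiple of the block size"
    nbr, nbc = rows // block, cols // block
    tiles = W.reshape(nbr, block, nbc, block)
    norms = np.linalg.norm(tiles, axis=(1, 3))             # one norm per tile, shape (nbr, nbc)
    k = max(int(round((1.0 - sparsity) * norms.size)), 1)  # number of tiles to keep
    thresh = np.sort(norms, axis=None)[::-1][k - 1]
    mask = (norms >= thresh).astype(W.dtype)               # block-level keep/drop mask
    mask = np.repeat(np.repeat(mask, block, axis=0), block, axis=1)
    return W * mask

def low_rank(W, rank=128):
    """Truncated SVD: W ~ U_r @ V_r, reducing storage from m*n values
    to rank*(m + n) values when the rank is small."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]   # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

# Example: compress a 768 x 768 weight matrix (a typical BERT-base layer width).
rng = np.random.default_rng(0)
W = rng.standard_normal((768, 768)).astype(np.float32)
W_pruned = block_prune(W, block=16, sparsity=0.5)
U_r, V_r = low_rank(W_pruned, rank=128)
err = np.linalg.norm(W_pruned - U_r @ V_r) / np.linalg.norm(W_pruned)
print(f"relative reconstruction error: {err:.3f}")
```

In this sketch the coarse-grained step removes whole tiles so the surviving blocks map onto dense Tensor-core matrix-multiply units without per-weight indices, and the fine-grained step replaces the remaining dense matrix with two skinny factors; the actual pruning criterion, block shape, and rank selection used in the paper may differ.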
Original language | English (US) |
---|---|
Title of host publication | GLSVLSI 2021 - Proceedings of the 2021 Great Lakes Symposium on VLSI |
Publisher | Association for Computing Machinery |
Pages | 169-174 |
Number of pages | 6 |
ISBN (Electronic) | 9781450383936 |
DOIs | |
State | Published - Jun 22 2021 |
Externally published | Yes |
Event | 31st Great Lakes Symposium on VLSI, GLSVLSI 2021 - Virtual, Online, United States (Duration: Jun 22 2021 → Jun 25 2021) |
Publication series
Name | Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI |
---|---|
Conference
Conference | 31st Great Lakes Symposium on VLSI, GLSVLSI 2021 |
---|---|
Country/Territory | United States |
City | Virtual, Online |
Period | 6/22/21 → 6/25/21 |
Bibliographical note
Publisher Copyright: © 2021 ACM.
Keywords
- bert
- block weight pruning
- low-rank
- tensor-core
- transformer