Abstract
Modern neural networks have revolutionized computer vision (CV) and natural language processing (NLP). They are widely used to solve complex tasks such as image classification, image generation, and machine translation. Most state-of-the-art neural networks are over-parameterized and carry a high computational cost. One straightforward remedy is to replace the network layers with low-rank tensor approximations obtained via different tensor decomposition methods. This article reviews six tensor decomposition methods and illustrates their ability to compress the model parameters of convolutional neural networks (CNNs), recurrent neural networks (RNNs), and Transformers. Some compressed models even achieve higher accuracy than their original versions. Evaluations indicate that tensor decompositions can achieve significant reductions in model size, run-time, and energy consumption, and are well suited for implementing neural networks on edge devices.
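As a rough illustration of the idea behind layer compression (not taken from the article), the minimal sketch below replaces a fully-connected layer's weight matrix with a rank-`r` truncated SVD, the matrix analogue of the tensor decompositions surveyed here; the dimensions and the rank `r` are hypothetical choices, and the article's six methods (CP, Tucker, block-term, tensor train, tensor ring, hierarchical Tucker) generalize this to higher-order weight tensors.

```python
import numpy as np

# Hypothetical dense layer: y = W x with W of shape (out_dim, in_dim).
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 1024))
r = 32  # chosen rank: the compression knob

# Rank-r truncated SVD of W, giving two thin factors A (512 x r) and B (r x 1024).
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]
B = Vt[:r, :]

# The single layer y = W x becomes two smaller layers y = A (B x):
# parameter count drops from 512*1024 to r*(512 + 1024).
x = rng.standard_normal(1024)
y_full = W @ x
y_lowrank = A @ (B @ x)

print("relative error:", np.linalg.norm(y_full - y_lowrank) / np.linalg.norm(y_full))
print("compression ratio:", W.size / (A.size + B.size))
```

For a weight matrix that is genuinely low-rank, the approximation error stays small while the parameter count and the per-inference multiply count shrink by roughly the same ratio, which is the trade-off the reviewed tensor methods exploit for convolutional, recurrent, and attention layers.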
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 8-28 |
| Number of pages | 21 |
| Journal | IEEE Circuits and Systems Magazine |
| Volume | 23 |
| Issue number | 2 |
| DOIs | |
| State | Published - 2023 |
Bibliographical note
Publisher Copyright: © 2001-2012 IEEE.
Keywords
- Tensor decomposition
- Tucker decomposition
- block-term decomposition
- canonical polyadic decomposition
- convolution neural network acceleration
- hierarchical Tucker decomposition
- model compression
- recurrent neural network acceleration
- tensor ring decomposition
- tensor train decomposition
- transformer acceleration