Abstract
A technique for compressing deep neural models that achieves performance competitive with state-of-the-art methods is proposed. The approach uses the mutual information between the feature maps and the output of the model to prune the redundant layers of the network. Extensive numerical experiments on the CIFAR-10, CIFAR-100, and Tiny ImageNet data sets demonstrate that the proposed method can effectively compress deep models, both in terms of the number of parameters and the number of operations. For instance, applying the proposed approach to a DenseNet model with 0.77 million parameters and 293 million operations for classification of the CIFAR-10 data set yields reductions of 62.66% and 41.00% in the number of parameters and the number of operations, respectively, while increasing the test error by less than 1%.
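To illustrate the idea, the following is a minimal sketch (not the authors' implementation, whose details are not given in the abstract) of ranking layers by the mutual information (MI) between their feature maps and the model's output, and returning the least informative layers as pruning candidates. Assumptions: each feature map is global-average-pooled to one value per channel, MI is estimated with scikit-learn's `mutual_info_classif`, and `feature_maps` / `predicted_labels` are obtained from a forward pass over a held-out batch; all names here are hypothetical.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif


def rank_layers_by_mi(feature_maps, predicted_labels):
    """feature_maps: dict layer_name -> array of shape (N, C, H, W);
    predicted_labels: array of shape (N,) with the model's output classes."""
    scores = {}
    for name, fmap in feature_maps.items():
        # Global-average-pool each channel so every sample becomes a (C,) vector.
        pooled = fmap.mean(axis=(2, 3))
        # MI between each pooled channel and the output, averaged over channels.
        mi = mutual_info_classif(pooled, predicted_labels, random_state=0)
        scores[name] = float(np.mean(mi))
    # Layers with the smallest average MI are the pruning candidates.
    return sorted(scores, key=scores.get)


# Usage example with random stand-in data (two hypothetical layers, 128 samples).
rng = np.random.default_rng(0)
fmaps = {"block1": rng.normal(size=(128, 16, 8, 8)),
         "block2": rng.normal(size=(128, 32, 4, 4))}
labels = rng.integers(0, 10, size=128)
print(rank_layers_by_mi(fmaps, labels))  # least informative layer first
```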
Original language | English (US) |
---|---|
Title of host publication | Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 6988-6995 |
Number of pages | 8 |
ISBN (Electronic) | 9781728188089 |
DOIs | |
State | Published - 2020 |
Event | 25th International Conference on Pattern Recognition, ICPR 2020 - Virtual, Milan, Italy |
Duration | Jan 10 2021 → Jan 15 2021 |
Publication series
Name | Proceedings - International Conference on Pattern Recognition |
---|---|
ISSN (Print) | 1051-4651 |
Conference
Conference | 25th International Conference on Pattern Recognition, ICPR 2020 |
---|---|
Country/Territory | Italy |
City | Virtual, Milan |
Period | 1/10/21 → 1/15/21 |
Bibliographical note
Publisher Copyright: © 2020 IEEE
Keywords
- Deep neural compression
- Feature maps
- Mutual information