Shallowing deep networks: Layer-wise pruning based on feature representations

Shi Chen, Qi Zhao

Research output: Contribution to journal › Article › peer-review

96 Scopus citations

Abstract

The recent surge of Convolutional Neural Networks (CNNs) has brought success to a variety of applications. However, these successes are accompanied by a significant increase in computational cost and demand for computational resources, which critically hampers the deployment of complex CNNs on devices with limited computational power. In this work, we propose a feature-representation-based layer-wise pruning method that aims to reduce complex CNNs to more compact ones with equivalent performance. Unlike previous parameter pruning methods that prune connection-wise or filter-wise based on weight information, our method identifies redundant parameters by investigating the features learned in the convolutional layers, and pruning is performed at the layer level. Experiments demonstrate that the proposed method significantly reduces computational cost, and the pruned models achieve equivalent or even better performance than the original models on various datasets.
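The abstract states the key idea (prune whole layers whose learned feature representations are redundant) but this record does not give the exact criterion. The sketch below illustrates one plausible feature-based redundancy test, not the authors' method: if a layer's output activations can be closely reconstructed as a linear function of its input activations, the layer contributes little and is a candidate for removal. The function names, the linear-reconstruction criterion, and the toy model are all assumptions made for illustration.

```python
# Hypothetical sketch of layer-wise pruning guided by feature redundancy.
# Not the paper's exact algorithm; a low score here suggests the layer's
# features are nearly a linear function of its inputs, i.e. redundant.
import torch
import torch.nn as nn

def layer_redundancy_score(feat_in: torch.Tensor, feat_out: torch.Tensor) -> float:
    """Relative error of linearly reconstructing feat_out from feat_in.

    feat_in, feat_out: (N, C, H, W) activations before/after a layer
    with matching spatial size. Lower scores indicate more redundancy.
    """
    # Treat each spatial position as one sample: (N*H*W, C).
    x = feat_in.permute(0, 2, 3, 1).reshape(-1, feat_in.shape[1])
    y = feat_out.permute(0, 2, 3, 1).reshape(-1, feat_out.shape[1])
    # Least-squares linear map from input features to output features.
    w = torch.linalg.lstsq(x, y).solution
    resid = y - x @ w
    return (resid.norm() / y.norm()).item()

if __name__ == "__main__":
    # Toy model; padding keeps spatial sizes aligned across layers.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    )
    x = torch.randn(8, 3, 32, 32)
    # Collect activations after every layer on a sample batch.
    feats, h = [x], x
    with torch.no_grad():
        for layer in model:
            h = layer(h)
            feats.append(h)
    # Score each conv layer; the lowest-scoring ones would be pruned first.
    for i, layer in enumerate(model):
        if isinstance(layer, nn.Conv2d):
            s = layer_redundancy_score(feats[i], feats[i + 1])
            print(f"layer {i}: redundancy score {s:.3f}")
```

In practice one would prune the lowest-scoring layers and fine-tune the shallower network to recover accuracy; the threshold for "redundant" is a design choice not specified in this record.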

Original language: English (US)
Article number: 8485719
Pages (from-to): 3048-3056
Number of pages: 9
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 41
Issue number: 12
State: Published - Dec 1, 2019

Bibliographical note

Publisher Copyright:
© 1979-2012 IEEE.

Keywords

  • Model pruning
  • compact design
  • convolutional neural networks
