The deployment constraints in practical applications necessitate the pruning of large-scale deep learning models, i.e., promoting their weight sparsity. As illustrated by the Lottery Ticket Hypothesis (LTH), pruning also has the potential to improve their generalization ability. At the core of LTH, iterative magnitude pruning (IMP) is the predominant pruning method for successfully finding 'winning tickets'. Yet, the computation cost of IMP grows prohibitively as the targeted pruning ratio increases. To reduce this overhead, various efficient 'one-shot' pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as those of IMP. This raises the question of how to close the gap between pruning accuracy and pruning efficiency. To tackle it, we pursue the algorithmic advancement of model pruning. Specifically, we formulate the pruning problem from a fresh viewpoint: bi-level optimization (BLO). We show that the BLO interpretation provides a technically grounded optimization basis for an efficient implementation of the pruning-retraining learning paradigm used in IMP. We also show that the proposed bi-level optimization-oriented pruning method (termed BIP) belongs to a special class of BLO problems with a bi-linear problem structure. By leveraging this bi-linearity, we theoretically show that BIP can be solved as easily as first-order optimization, thus inheriting its computation efficiency. Through extensive experiments on both structured and unstructured pruning with 5 model architectures and 4 datasets, we demonstrate that BIP can find better winning tickets than IMP in most cases, and is computationally as efficient as one-shot pruning schemes, achieving a 2-7× speedup over IMP for the same level of model accuracy and sparsity. Codes are available at https://github.com/OPTML-Group/BiP.
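The pruning-retraining paradigm described above can be sketched as an alternating bi-level loop: a lower level that trains weights under a fixed binary mask, and an upper level that re-selects the mask. The toy example below is our own simplification on a synthetic linear-regression problem, not the authors' BIP algorithm; the learning rate, sparsity level, and hard top-k mask update are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a sparse ground-truth linear model (illustrative, not the paper's setting).
n, d, k = 200, 10, 3
w_true = np.zeros(d)
w_true[:k] = [2.0, -1.5, 1.0]
X = rng.normal(size=(n, d))
y = X @ w_true + 0.01 * rng.normal(size=n)

def masked_loss(w, m):
    """Mean-squared error of the pruned (mask-applied) model."""
    r = X @ (m * w) - y
    return 0.5 * float(np.mean(r ** 2))

w = rng.normal(scale=0.1, size=d)  # model weights
m = np.ones(d)                     # binary pruning mask (all weights kept at start)
init_loss = masked_loss(w, m)

lr = 0.1
for _ in range(300):
    # Lower level: one gradient step on the weights with the mask held fixed.
    # Note: masked-out coordinates receive zero gradient in this simplified scheme.
    grad_w = m * (X.T @ (X @ (m * w) - y)) / n
    w = w - lr * grad_w
    # Upper level: re-select the mask, keeping the k largest-magnitude weights.
    m = np.zeros(d)
    m[np.argsort(-np.abs(w))[:k]] = 1.0

final_loss = masked_loss(w, m)
print(f"masked loss: {init_loss:.3f} -> {final_loss:.3f}")
```

The alternation mirrors the BLO structure at a caricature level: the inner problem depends on the mask, and the mask update depends on the inner solution. BIP's actual formulation and its bi-linearity-based first-order solver are developed in the paper.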
|Original language||English (US)|
|Title of host publication||Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022|
|Editors||S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh|
|Publisher||Neural information processing systems foundation|
|State||Published - 2022|
|Event||36th Conference on Neural Information Processing Systems, NeurIPS 2022 - New Orleans, United States|
Duration: Nov 28 2022 → Dec 9 2022
|Name||Advances in Neural Information Processing Systems|
|Conference||36th Conference on Neural Information Processing Systems, NeurIPS 2022|
|Period||11/28/22 → 12/9/22|
|Bibliographical note||Funding Information:|
The work of Y. Zhang, Y. Yao, and S. Liu was supported by National Science Foundation (NSF) Grant IIS-2207052. The work of M. Hong was supported by NSF grants CIF-1910385 and CMMI-1727757. The work of Y. Wang was supported by NSF grant CCF-1919117. The computing resources used in this work were also supported by the MIT-IBM Watson AI Lab, IBM Research and the Institute for Cyber-Enabled Research (ICER) at Michigan State University.
© 2022 Neural information processing systems foundation. All rights reserved.