Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm

  • Shaoyi Huang
  • Dongkuan Xu
  • Ian En-Hsu Yen
  • Yijue Wang
  • Sung-En Chang
  • Bingbing Li
  • Shiyang Chen
  • Mimi Xie
  • Sanguthevar Rajasekaran
  • Hang Liu
  • Caiwen Ding

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

12 Scopus citations

Abstract

Conventional wisdom in pruning Transformer-based language models holds that pruning reduces model expressiveness and is therefore more likely to cause underfitting than overfitting. However, under the trending pretrain-and-finetune paradigm, we postulate a counter-traditional hypothesis: pruning increases the risk of overfitting when performed at the fine-tuning phase. In this paper, we aim to address the overfitting problem and improve pruning performance via progressive knowledge distillation with error-bound properties. We show for the first time that reducing the risk of overfitting can improve the effectiveness of pruning under the pretrain-and-finetune paradigm. Ablation studies and experiments on the GLUE benchmark show that our method outperforms the leading competitors across different tasks.
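The abstract combines two generic ingredients: gradually increasing sparsity during fine-tuning and a knowledge-distillation loss from a dense teacher. As a rough illustration only (the paper's actual schedule, layerwise distillation, and error-bound analysis are not reproduced here), a minimal sketch of a cubic sparsity ramp, magnitude pruning, and a temperature-softened distillation loss might look like the following; all function names and constants are hypothetical:

```python
import math

def sparsity_schedule(step, total_steps, final_sparsity):
    """Cubic sparsity ramp from 0 to final_sparsity.
    A common progressive-pruning schedule, used here as a
    hypothetical stand-in for the paper's own schedule."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, zeroed = [], 0
    for w in weights:
        if abs(w) <= threshold and zeroed < k:
            pruned.append(0.0)
            zeroed += 1
        else:
            pruned.append(w)
    return pruned

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, the
    standard knowledge-distillation objective."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

During fine-tuning, each training step would prune the student to `sparsity_schedule(step, ...)` and add `kd_loss` to the task loss; the paper's contribution lies in how the distillation is made progressive with error-bound guarantees, which this sketch does not capture.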

Original language: English (US)
Title of host publication: ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)
Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Publisher: Association for Computational Linguistics (ACL)
Pages: 190-200
Number of pages: 11
ISBN (Electronic): 9781955917216
DOIs
State: Published - 2022
Externally published: Yes
Event: 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 - Dublin, Ireland
Duration: May 22, 2022 – May 27, 2022

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
Volume: 1
ISSN (Print): 0736-587X

Conference

Conference: 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022
Country/Territory: Ireland
City: Dublin
Period: 5/22/22 – 5/27/22

Bibliographical note

Publisher Copyright:
© 2022 Association for Computational Linguistics.
