L1 Regularization in Two-Layer Neural Networks

Gen Li, Yuantao Gu, Jie Ding

Research output: Contribution to journal › Article › peer-review



A crucial problem of neural networks is to select an architecture that strikes appropriate tradeoffs between underfitting and overfitting. This work shows that ℓ1 regularization for two-layer neural networks can control the generalization error and sparsify the input dimension. In particular, an appropriate ℓ1 regularization on the output layer yields a tight statistical risk bound. Moreover, an appropriate ℓ1 regularization on the input layer leads to a risk bound that does not involve the input data dimension. The results also indicate that training a wide neural network with a suitable regularization provides an alternative bias-variance tradeoff to selecting from a candidate set of neural networks. Our analysis is based on a new integration of dimension-based and norm-based complexity analysis to bound the generalization error.
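As a rough illustration of the setting the abstract describes (not the authors' code, and with made-up penalty weights `lam_in` and `lam_out`), the following sketch writes down the ℓ1-regularized objective for a two-layer network f(x) = aᵀσ(Wx), with one penalty on the output-layer weights a and another on the input-layer weights W:

```python
import numpy as np

def relu(z):
    # σ: elementwise ReLU activation for the hidden layer
    return np.maximum(z, 0.0)

def predict(W, a, X):
    # Two-layer network: X is (n, d), W is (m, d) input-layer
    # weights, a is (m,) output-layer weights.
    return relu(X @ W.T) @ a

def l1_regularized_loss(W, a, X, y, lam_in=0.01, lam_out=0.01):
    # Squared loss plus l1 penalties: lam_out * ||a||_1 targets the
    # output layer (risk control), lam_in * ||W||_1 targets the input
    # layer (sparsifying the input dimension).
    residual = predict(W, a, X) - y
    mse = 0.5 * np.mean(residual ** 2)
    return mse + lam_out * np.abs(a).sum() + lam_in * np.abs(W).sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X[:, 0] - X[:, 1]  # target uses only 2 of the 10 input coordinates
W = 0.1 * rng.standard_normal((8, 10))
a = 0.1 * rng.standard_normal(8)
loss = l1_regularized_loss(W, a, X, y)
```

Minimizing such an objective (e.g. with proximal gradient descent) drives many entries of W and a exactly to zero, which is the sparsification mechanism the abstract refers to.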

Original language: English (US)
Pages (from-to): 135-139
Number of pages: 5
Journal: IEEE Signal Processing Letters
State: Published - 2022

Bibliographical note

Publisher Copyright:
© 1994-2012 IEEE.


Keywords

  • Generalization error
  • model complexity
  • neural network
  • regularization


