Adaptive regularization using the entire solution surface

Research output: Contribution to journal › Article › peer-review



Several sparseness penalties have been suggested to deliver good predictive performance in automatic variable selection within the regularization framework. All assume that the true model is sparse. We propose a penalty, a convex combination of the L1- and L∞-norms, that adapts to a variety of situations, including sparseness and nonsparseness, grouping and nongrouping. The proposed penalty performs grouping and adaptive regularization. In addition, we introduce a novel homotopy algorithm utilizing subgradients to develop regularization solution surfaces involving multiple regularizers, permitting efficient computation and adaptive tuning. In simulated and real examples, the proposed penalty compares well against popular alternatives.
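As a minimal illustration of the penalty the abstract describes, the sketch below evaluates a convex combination of the L1- and L∞-norms. The function name, the mixing weight `alpha`, and the overall scale `lam` are assumptions for exposition, not the paper's notation; at `alpha = 1` the penalty reduces to the lasso's L1-norm, and at `alpha = 0` to the pure L∞-norm, which encourages grouping by tying the largest coefficients together.

```python
import numpy as np

def l1_linf_penalty(beta, alpha, lam=1.0):
    """Convex combination of the L1- and L-infinity-norms.

    alpha = 1 recovers the L1 (lasso) penalty, favoring sparse solutions;
    alpha = 0 recovers the L-infinity penalty, which penalizes only the
    largest absolute coefficient and so encourages grouping.
    (Names and parameterization are illustrative, not the paper's.)
    """
    beta = np.asarray(beta, dtype=float)
    return lam * (alpha * np.abs(beta).sum()
                  + (1.0 - alpha) * np.abs(beta).max())

beta = [0.5, -0.5, 0.1]
print(l1_linf_penalty(beta, alpha=1.0))  # pure L1-norm: 0.5 + 0.5 + 0.1
print(l1_linf_penalty(beta, alpha=0.0))  # pure L-infinity-norm: 0.5
print(l1_linf_penalty(beta, alpha=0.5))  # equal-weight combination
```

Varying `alpha` (and the regularization level `lam`) traces out the two-parameter solution surface that the paper's subgradient-based homotopy algorithm computes efficiently.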

Original language: English (US)
Pages (from-to): 513-527
Number of pages: 15
Issue number: 3
State: Published - Sep 2009

Bibliographical note

Funding Information:
This research was supported in part by grants from the U.S. National Science Foundation and National Institutes of Health.


Keywords

  • Homotopy
  • L1-norm
  • Lasso
  • L∞-norm
  • Subgradient
  • Support vector machine
  • Variable grouping and selection


