Model selection via standard error adjusted adaptive lasso

Wei Qian, Yuhong Yang

Research output: Contribution to journal › Article › peer-review

Abstract

The adaptive lasso is a model selection method that is both consistent in variable selection and asymptotically normal in coefficient estimation. Its actual variable selection performance, however, depends on the weights used. It turns out that assigning weights from the OLS estimate (OLS-adaptive lasso) can perform very poorly when collinearity of the model matrix is a concern. To obtain better variable selection results, we take the standard errors of the OLS estimate into account in the weight calculation and propose two versions of the adaptive lasso, denoted SEA-lasso and NSEA-lasso. Numerical studies show that when the predictors are highly correlated, SEA-lasso and NSEA-lasso can outperform the OLS-adaptive lasso under a variety of linear regression settings while retaining the theoretical properties of the adaptive lasso.
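As a rough, hedged illustration of the weighting idea described in the abstract (not the authors' implementation), the Python sketch below contrasts adaptive-lasso weights built from the OLS estimate alone with a standard-error-adjusted variant. The specific adjusted weight formula se_j / |beta_ols_j|, the tuning value alpha = 0.1, and the use of scikit-learn's Lasso are assumptions made for illustration only; the paper defines the SEA-lasso and NSEA-lasso weights precisely.

# Minimal sketch (assumed formulas, not the authors' code): adaptive-lasso
# weights from OLS estimates, with and without a standard-error adjustment.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)      # induce strong collinearity
y = X @ np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0]) + rng.normal(size=n)

# OLS fit and standard errors of the OLS coefficients
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols
sigma2 = resid @ resid / (n - p)
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))

# OLS-adaptive-lasso weights vs. a standard-error-adjusted variant (assumed form)
w_ols = 1.0 / np.abs(beta_ols)   # classic adaptive-lasso weights (gamma = 1)
w_sea = se / np.abs(beta_ols)    # weights inflated for unstable (high-SE) coefficients

def weighted_lasso(X, y, w, alpha=0.1):
    """Solve a weighted lasso by rescaling columns, then map coefficients back."""
    Xw = X / w                                        # column j scaled by 1/w_j
    fit = Lasso(alpha=alpha, fit_intercept=False).fit(Xw, y)
    return fit.coef_ / w                              # undo the column scaling

print(weighted_lasso(X, y, w_ols))   # OLS-adaptive lasso estimate
print(weighted_lasso(X, y, w_sea))   # standard-error-adjusted estimate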

Original language: English (US)
Pages (from-to): 295-318
Number of pages: 24
Journal: Annals of the Institute of Statistical Mathematics
Volume: 65
Issue number: 2
DOIs
State: Published - Apr 2013

Bibliographical note

Funding Information:
Acknowledgments The authors thank two anonymous reviewers and the Associate Editor for their helpful comments, which improved the presentation of the paper. The first author is grateful for a summer research scholarship for first-year students from the School of Statistics at the University of Minnesota.

Keywords

  • BIC
  • Model selection consistency
  • Solution path
  • Variable selection
