The adaptive lasso is a model selection method shown to be both consistent in variable selection and asymptotically normal in coefficient estimation. Its actual variable selection performance depends on the weights used. It turns out that weight assignment using the OLS estimate (OLS-adaptive lasso) can result in very poor performance when collinearity of the model matrix is a concern. To achieve better variable selection results, we take the standard errors of the OLS estimate into account when calculating the weights, and propose two versions of the adaptive lasso, denoted SEA-lasso and NSEA-lasso. We show through numerical studies that when the predictors are highly correlated, SEA-lasso and NSEA-lasso can outperform the OLS-adaptive lasso under a variety of linear regression settings while maintaining the same theoretical properties as the adaptive lasso.
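The weighting idea can be sketched in code. The classical OLS-adaptive lasso uses weights w_j = 1/|beta_OLS,j|^gamma; the sketch below additionally offers an illustrative SE-adjusted variant that folds the OLS standard errors into the weights. This is a stand-in for the flavor of adjustment the abstract describes, not the paper's exact SEA-/NSEA-lasso formulas; all function names and the toy data here are our own.

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=500):
    """Plain lasso via cyclic coordinate descent on
    (1/(2n))||y - X b||^2 + alpha * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j excluded
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r
            # soft-thresholding update
            beta[j] = np.sign(z) * max(abs(z) - n * alpha, 0.0) / col_sq[j]
    return beta

def adaptive_lasso(X, y, alpha=0.05, gamma=1.0, use_se=False):
    """Adaptive lasso via the usual column-rescaling trick.
    With use_se=True, the OLS standard errors enter the weights
    (an illustrative SE-adjusted variant, NOT the exact SEA-/NSEA-
    lasso weights from the paper)."""
    n, p = X.shape
    # OLS estimate and its standard errors
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_ols = XtX_inv @ (X.T @ y)
    resid = y - X @ beta_ols
    sigma2 = resid @ resid / (n - p)
    se = np.sqrt(sigma2 * np.diag(XtX_inv))
    signal = np.abs(beta_ols)
    if use_se:
        signal = signal / se        # |t-statistic| replaces |beta_ols|
    w = 1.0 / np.maximum(signal, 1e-12) ** gamma
    Xs = X / w                      # rescale columns by the weights
    coef = lasso_cd(Xs, y, alpha)
    return coef / w                 # undo the rescaling

# Toy example: 6 predictors, 3 of them truly zero
rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)
coef = adaptive_lasso(X, y, use_se=True)
```

The column-rescaling trick (solve an ordinary lasso on X_j / w_j, then divide the fitted coefficients by w_j) is a standard way to turn any lasso solver into an adaptive-lasso solver.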
Original language: English (US)
Number of pages: 24
Journal: Annals of the Institute of Statistical Mathematics
State: Published - Apr 2013
Bibliographical note (Funding Information)
Acknowledgments: The authors thank two anonymous reviewers and the Associate Editor for their helpful comments on improving the presentation of the paper. The first author is grateful for a summer research scholarship for first-year students from the School of Statistics at the University of Minnesota.
Keywords
- Model selection consistency
- Solution path
- Variable selection