TY - JOUR
T1 - The adaptive lasso and its oracle properties
AU - Zou, Hui
PY - 2006/12
Y1 - 2006/12
N2 - The lasso is a popular technique for simultaneous estimation and variable selection. Lasso variable selection has been shown to be consistent under certain conditions. In this work we derive a necessary condition for the lasso variable selection to be consistent. Consequently, there exist certain scenarios where the lasso is inconsistent for variable selection. We then propose a new version of the lasso, called the adaptive lasso, where adaptive weights are used for penalizing different coefficients in the ℓ1 penalty. We show that the adaptive lasso enjoys the oracle properties; namely, it performs as well as if the true underlying model were given in advance. Similar to the lasso, the adaptive lasso is shown to be near-minimax optimal. Furthermore, the adaptive lasso can be solved by the same efficient algorithm for solving the lasso. We also discuss the extension of the adaptive lasso in generalized linear models and show that the oracle properties still hold under mild regularity conditions. As a byproduct of our theory, the nonnegative garrote is shown to be consistent for variable selection.
AB - The lasso is a popular technique for simultaneous estimation and variable selection. Lasso variable selection has been shown to be consistent under certain conditions. In this work we derive a necessary condition for the lasso variable selection to be consistent. Consequently, there exist certain scenarios where the lasso is inconsistent for variable selection. We then propose a new version of the lasso, called the adaptive lasso, where adaptive weights are used for penalizing different coefficients in the ℓ1 penalty. We show that the adaptive lasso enjoys the oracle properties; namely, it performs as well as if the true underlying model were given in advance. Similar to the lasso, the adaptive lasso is shown to be near-minimax optimal. Furthermore, the adaptive lasso can be solved by the same efficient algorithm for solving the lasso. We also discuss the extension of the adaptive lasso in generalized linear models and show that the oracle properties still hold under mild regularity conditions. As a byproduct of our theory, the nonnegative garrote is shown to be consistent for variable selection.
KW - Asymptotic normality
KW - Lasso
KW - Minimax
KW - Oracle inequality
KW - Oracle procedure
KW - Variable selection
UR - http://www.scopus.com/inward/record.url?scp=33846114377&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33846114377&partnerID=8YFLogxK
U2 - 10.1198/016214506000000735
DO - 10.1198/016214506000000735
M3 - Article
AN - SCOPUS:33846114377
SN - 0162-1459
VL - 101
SP - 1418
EP - 1429
JO - Journal of the American Statistical Association
JF - Journal of the American Statistical Association
IS - 476
ER -