Sparse penalized quantile regression is a useful tool for variable selection, robust estimation, and heteroscedasticity detection in high-dimensional data analysis. The computational issue of sparse penalized quantile regression has not yet been fully resolved in the literature, due to the nonsmoothness of the quantile regression loss function. We introduce fast alternating direction method of multipliers (ADMM) algorithms for computing sparse penalized quantile regression. The convergence properties of the proposed algorithms are established. Numerical examples demonstrate the competitive performance of our algorithms: they significantly outperform several other fast solvers for high-dimensional penalized quantile regression. Supplementary materials for this article are available online.
Funding Information:
This work is supported in part by NSF grant DMS-1505111, the 111 Project of China (B16002), National Science Foundation of China grants 11431002 and 11671029, and the Hong Kong Research Grants Council General Research Fund (14205314).
© 2018 American Statistical Association and the American Society for Quality. © 2018 Yuwen Gu and Jun Fan.
- Alternating direction method of multipliers
- Nonconvex penalty
- Quantile regression
- Variable selection