Wild residual bootstrap inference for penalized quantile regression with heteroscedastic errors

Lan Wang, Ingrid Van Keilegom, Adam Maidman

Research output: Contribution to journal › Article


Abstract

We consider a heteroscedastic regression model in which some of the regression coefficients are zero but it is not known which ones. Penalized quantile regression is a useful approach for analysing such data. By allowing different covariates to be relevant for modelling conditional quantile functions at different quantile levels, it provides a more complete picture of the conditional distribution of a response variable than mean regression. Existing work on penalized quantile regression has mostly focused on point estimation. Although bootstrap procedures have recently been shown to be effective for inference for penalized mean regression, they are not directly applicable to penalized quantile regression with heteroscedastic errors. We prove that a wild residual bootstrap procedure for unpenalized quantile regression is asymptotically valid for approximating the distribution of a penalized quantile regression estimator with an adaptive L1 penalty, and that a modified version can be used to approximate the distribution of an L1-penalized quantile regression estimator. The new methods do not require estimation of the unknown error density function. We establish consistency, demonstrate finite-sample performance, and illustrate the application with a real data example.
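As a rough illustration of the procedure summarized in the abstract, the sketch below implements a generic wild residual bootstrap loop around an L1-penalized quantile regression fit. It is a minimal sketch, not the paper's exact algorithm: scikit-learn's QuantileRegressor is used as a stand-in L1 solver, the two-point weight distribution (chosen only so that its tau-th quantile is zero) is an illustrative assumption rather than the weight conditions required in the paper, and the adaptive-L1 weighting and the modified scheme for the lasso estimator are omitted. The tuning parameter alpha, the number of replications B, and the 95% percentile intervals are arbitrary defaults.

```python
# Minimal sketch of a wild residual bootstrap for L1-penalized quantile
# regression. Illustrative only: the weight law and the sklearn solver are
# assumptions, not the procedure proved valid in the paper.
import numpy as np
from sklearn.linear_model import QuantileRegressor

def wild_residual_bootstrap(X, y, tau=0.5, alpha=0.1, B=500, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape

    # Step 1: penalized quantile regression fit on the original data.
    fit = QuantileRegressor(quantile=tau, alpha=alpha, solver="highs").fit(X, y)
    resid = y - fit.predict(X)

    # Step 2: wild residual bootstrap. Bootstrap responses are the fitted
    # values plus weighted absolute residuals, w_i * |r_i|, where the weights
    # w_i are i.i.d. with P(w <= 0) = tau so that the tau-th conditional
    # quantile of the bootstrap error is zero (illustrative two-point law).
    boot = np.empty((B, p))
    for b in range(B):
        w = np.where(rng.random(n) < tau, -2.0 * (1.0 - tau), 2.0 * tau)
        y_star = fit.predict(X) + w * np.abs(resid)
        boot[b] = QuantileRegressor(quantile=tau, alpha=alpha,
                                    solver="highs").fit(X, y_star).coef_

    # Step 3: pointwise percentile confidence intervals for each coefficient.
    lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
    return fit.coef_, lower, upper
```

A call such as wild_residual_bootstrap(X, y, tau=0.75, alpha=0.05) would return the penalized estimate together with pointwise 95% percentile intervals for each coefficient; note that avoiding estimation of the error density, as emphasized in the abstract, is exactly what the bootstrap replications replace.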

Original language: English (US)
Pages (from-to): 859-872
Number of pages: 14
Journal: Biometrika
Volume: 105
Issue number: 4
State: Published - Dec 1 2018


Keywords

  • Adaptive lasso
  • Confidence interval
  • Lasso
  • Penalized quantile regression
  • Wild bootstrap
