Sliced inverse regression (SIR) is an innovative and effective method for dimension reduction and data visualization in high-dimensional problems. It replaces the original variables with low-dimensional linear combinations of predictors, without any loss of regression information and without the need to prespecify a model or an error distribution. However, it suffers from the fact that each SIR component is a linear combination of all the original predictors, so the extracted components are often difficult to interpret. In this article, by representing SIR as a regression-type optimization problem, we propose a new method, called sparse SIR, that combines the shrinkage idea of the lasso with SIR to produce solutions that are both accurate and sparse. The efficacy of the proposed method is verified by simulation, and a real data example is presented.
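The idea sketched in the abstract can be illustrated in two steps: estimate SIR directions from the eigen-decomposition of the between-slice covariance, then sparsify a direction by regressing the fitted SIR index on the predictors with a lasso penalty. The following numpy sketch is illustrative only and is not the authors' exact algorithm; the function names, the plain coordinate-descent lasso solver, and all tuning values are assumptions made for the example.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Classical SIR: slice on y, average the standardized predictors
    within each slice, and eigen-decompose the slice-mean covariance."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Whiten X with the inverse square root of its sample covariance.
    cov = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Slice the observations by the order of y; form the weighted
    # covariance of the within-slice means of Z.
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original X scale.
    _, v = np.linalg.eigh(M)
    dirs = inv_sqrt @ v[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)

def lasso_cd(X, t, lam, n_iter=200):
    """Toy coordinate-descent lasso (soft-thresholding), used here to
    approximate a target direction's index t = X @ d with a sparse beta."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = t - X @ beta + X[:, j] * beta[j]  # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta
```

A typical use: simulate `y` depending on only the first of several predictors, estimate the leading SIR direction `d`, then call `lasso_cd(X, X @ d, lam)` to obtain a sparse version of `d` whose zero coefficients drop the irrelevant predictors, which is the interpretability gain the abstract describes.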
Original language: English (US)
Number of pages: 8
State: Published - Nov 2006
Bibliographical note (funding information):
Li’s research was supported by NIH grant ES11269 when he was a postdoctoral researcher in 2004 at the University of California, Davis. The authors thank the editor, the associate editor, and two anonymous referees for comments and suggestions that led to significant improvements in this article.
- Regression shrinkage
- Sliced inverse regression
- Sufficient dimension reduction