Abstract
Sufficient dimension reduction (SDR) is a powerful tool for data reduction and data visualization in regression and classification problems. In this work, we study ultrahigh-dimensional SDR problems and propose solutions under a unified minimum discrepancy approach with regularization. When p grows exponentially with n, consistency results for both central subspace estimation and variable selection are established simultaneously for important SDR methods, including sliced inverse regression (SIR), principal fitted components (PFC), and sliced average variance estimation (SAVE). Special sparse structures of the large predictor or error covariance matrices are also considered for potentially better performance. In addition, the proposed approach is equipped with a new algorithm that efficiently solves the regularized objective functions and a new data-driven procedure that determines the structural dimension and tuning parameters, without the need to invert a large covariance matrix. Simulations and a real data analysis demonstrate the promise of our proposal in ultrahigh-dimensional settings. Supplementary materials for this article are available online.
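For context on the building blocks, the sketch below illustrates classical (unregularized) SIR, one of the methods the paper extends. It is not the paper's regularized estimator: the function name sir_directions and its defaults are illustrative, and the whitening step explicitly inverts the sample covariance, which is precisely the operation the proposed approach avoids when p greatly exceeds n.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, d=2):
    """Minimal sketch of classical sliced inverse regression (SIR).

    Whitens X, slices y into roughly equal-count groups, and takes the
    top eigenvectors of the weighted covariance of the slice means.
    """
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    # Whitening via eigendecomposition; assumes Sigma is invertible
    # (the classical n > p setting -- the paper's regularized approach
    # is designed to avoid this inversion in ultrahigh dimensions).
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ Sigma_inv_sqrt
    # Slice the response by quantiles into n_slices groups.
    edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
    edges[-1] += 1e-12  # make the last slice include the maximum of y
    M = np.zeros((p, p))
    for s in range(n_slices):
        mask = (y >= edges[s]) & (y < edges[s + 1])
        if not mask.any():
            continue
        m = Z[mask].mean(axis=0)
        M += mask.mean() * np.outer(m, m)  # weight by slice proportion
    # Leading eigenvectors of M estimate the central subspace on the
    # whitened scale; map them back to the original predictor scale.
    w, V = np.linalg.eigh(M)
    return Sigma_inv_sqrt @ V[:, ::-1][:, :d]
```

On simulated data where y depends on X only through a few linear combinations, the columns returned by this sketch recover those directions up to an invertible linear transformation.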
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1277-1290 |
| Number of pages | 14 |
| Journal | Journal of the American Statistical Association |
| Volume | 114 |
| Issue number | 527 |
| DOIs | |
| State | Published - Jul 3 2019 |
Bibliographical note
Funding Information: Ding's research is partially supported by the DE-CTR ACCEL/NIH U54 GM104941 SHoRe award and the University of Delaware GUR award. The authors sincerely thank the editor, the associate editor, and the anonymous referees for their valuable comments, which helped improve this article significantly.
Publisher Copyright:
© 2018 American Statistical Association.
Keywords
- Central subspace
- Inverse regression
- Principal fitted component
- Sliced average variance estimation
- Sliced inverse regression
- Sparsity