On quadratic convergence of DC proximal Newton algorithm in nonconvex sparse learning

Xingguo Li, Lin F. Yang, Jason Ge, Jarvis D. Haupt, Tong Zhang, Tuo Zhao

Research output: Contribution to journal › Conference article › peer-review



We propose a DC proximal Newton algorithm for solving nonconvex regularized sparse learning problems in high dimensions. Our proposed algorithm integrates the proximal Newton algorithm with multi-stage convex relaxation based on the difference of convex (DC) programming, and enjoys both strong computational and statistical guarantees. Specifically, by leveraging a sophisticated characterization of sparse modeling structures (i.e., local restricted strong convexity and Hessian smoothness), we prove that within each stage of convex relaxation, our proposed algorithm achieves (local) quadratic convergence, and eventually obtains a sparse approximate local optimum with optimal statistical properties after only a few convex relaxations. Numerical experiments are provided to support our theory.
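To make the multi-stage convex relaxation concrete, below is a minimal sketch, not the authors' implementation. It assumes a least-squares loss and the MCP penalty as the nonconvex regularizer; under the DC decomposition, each stage reduces to a weighted-L1 (adaptive lasso) subproblem with weights given by the MCP derivative at the previous stage's solution. Since the least-squares Hessian is constant, the proximal Newton subproblem with the exact Hessian coincides with the weighted lasso itself, which we solve here by coordinate descent. All function names (`mcp_grad`, `weighted_lasso_cd`, `dc_multistage`) are hypothetical.

```python
import numpy as np

def mcp_grad(t, lam, gamma=3.0):
    """Derivative of the MCP penalty at |t|: lam - |t|/gamma on [0, gamma*lam], 0 beyond.
    These become the per-coordinate L1 weights of the next convex relaxation stage."""
    return np.maximum(lam - np.abs(t) / gamma, 0.0)

def weighted_lasso_cd(X, y, w, theta, n_iters=200):
    """Coordinate descent for 0.5/n * ||y - X theta||^2 + sum_j w_j |theta_j|."""
    n, d = X.shape
    col_sq = (X ** 2).sum(axis=0) / n      # per-coordinate curvature
    r = y - X @ theta                      # running residual
    for _ in range(n_iters):
        for j in range(d):
            old = theta[j]
            rho = (X[:, j] @ r) / n + col_sq[j] * old
            # soft-thresholding step with coordinate-specific weight w_j
            theta[j] = np.sign(rho) * max(abs(rho) - w[j], 0.0) / col_sq[j]
            if theta[j] != old:
                r -= X[:, j] * (theta[j] - old)
    return theta

def dc_multistage(X, y, lam, n_stages=5):
    """Multi-stage convex relaxation: stage 1 is a plain lasso (all weights = lam);
    later stages reweight via the DC decomposition, removing shrinkage bias on
    coordinates that have grown large."""
    d = X.shape[1]
    theta = np.zeros(d)
    for _ in range(n_stages):
        w = mcp_grad(theta, lam)           # relaxed weights from previous solution
        theta = weighted_lasso_cd(X, y, w, theta.copy())
    return theta
```

On a well-conditioned design, large true coefficients receive weight zero after the first stage, so subsequent stages debias them toward the oracle least-squares fit while small (noise) coordinates keep the full weight `lam` and stay at zero; this is the mechanism behind the "optimal statistical properties after only a few convex relaxations" claimed above.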

Original language: English (US)
Pages (from-to): 2743-2753
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
State: Published - 2017
Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
Duration: Dec 4, 2017 – Dec 9, 2017

Bibliographical note

Funding Information:
*The work was done while the author was at Johns Hopkins University. †The authors acknowledge support from DARPA YFA N66001-14-1-4047, NSF Grant IIS-1447639, and a Doctoral Dissertation Fellowship from the University of Minnesota. Correspondence to: Xingguo Li <lixx1661@umn.edu> and Tuo Zhao <tuo.zhao@isye.gatech.edu>.

Publisher Copyright:
© 2017 Neural information processing systems foundation. All rights reserved.
