Generalized conjugate gradient methods for ℓ1 regularized convex quadratic programming with finite convergence

Zhaosong Lu, Xiaojun Chen

Research output: Contribution to journal › Article › peer-review



The conjugate gradient (CG) method is an efficient iterative method for solving large-scale strongly convex quadratic programming (QP). In this paper, we propose generalized CG (GCG) methods for solving ℓ1-regularized (possibly not strongly) convex QP that terminate at an optimal solution in a finite number of iterations. At each iteration, our methods first identify a face of an orthant and then either perform an exact line search along the direction of the negative projected minimum-norm subgradient of the objective function or execute a CG subroutine that conducts a sequence of CG iterations until a CG iterate crosses the boundary of this face or an approximate minimizer of the objective function over this face or a subface is found. We determine which type of step to take by comparing the magnitude of some components of the minimum-norm subgradient of the objective function with that of its remaining components. Our analysis of the finite convergence of these methods makes use of an error bound result and some key properties of the aforementioned exact line search and the CG subroutine. We also show that the proposed methods are capable of finding an approximate solution of the problem by allowing some inexactness in the execution of the CG subroutine. The overall arithmetic operation cost of our GCG methods for finding an ϵ-optimal solution depends on ϵ as O(log(1/ϵ)), which is superior to the accelerated proximal gradient method (Beck and Teboulle [Beck A, Teboulle M (2009) A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1):183-202]; Nesterov [Nesterov Yu (2013) Gradient methods for minimizing composite functions. Math. Program. 140(1):125-161]), whose cost depends on ϵ as O(1/√ϵ). In addition, our GCG methods can be extended straightforwardly to solve box-constrained convex QP with finite convergence. Numerical results demonstrate that our methods are very favorable for solving ill-conditioned problems.
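One ingredient the abstract relies on is the minimum-norm subgradient of the objective. The following is a minimal sketch, not code from the paper: it assumes the objective takes the standard form F(x) = ½xᵀAx + bᵀx + λ‖x‖₁, and the names `A`, `b`, `lam` are illustrative. Where a component of x is nonzero the subgradient is unique; where it is zero, the minimum-norm element of the subdifferential is obtained by soft-thresholding the smooth gradient.

```python
import numpy as np

def min_norm_subgradient(A, b, lam, x):
    """Minimum-norm element of the subdifferential of
    F(x) = 0.5 * x^T A x + b^T x + lam * ||x||_1.
    Illustrative sketch only; the notation is an assumption,
    not taken from the paper."""
    g = A @ x + b                       # gradient of the smooth quadratic part
    s = np.empty_like(g)
    nz = x != 0
    # Where x_i != 0 the subgradient is unique: g_i + lam * sign(x_i).
    s[nz] = g[nz] + lam * np.sign(x[nz])
    # Where x_i == 0 the subdifferential is the interval g_i + lam*[-1, 1];
    # its minimum-norm element is g_i soft-thresholded at lam.
    z = ~nz
    s[z] = np.sign(g[z]) * np.maximum(np.abs(g[z]) - lam, 0.0)
    return s
```

At an optimal solution this vector is zero, so its components can serve both as a stopping test and, as in the methods described above, as the quantity whose component magnitudes are compared to decide which type of step to take.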

Original language: English (US)
Pages (from-to): 275-303
Number of pages: 29
Journal: Mathematics of Operations Research
Issue number: 1
State: Published - Feb 2018
Externally published: Yes

Bibliographical note

Funding Information:
Funding: The first author’s work was supported in part by Natural Sciences and Engineering Research Council of Canada. The second author’s work was supported in part by Hong Kong Research Council [Grant PolyU153000/15p].

Publisher Copyright:
© 2017 INFORMS.


Keywords:

  • Conjugate gradient method
  • Convex quadratic programming
  • Finite convergence
  • Sparse optimization
  • ℓ1-regularization


