Convergence rate analysis of a sequential convex programming method with line search for a class of constrained difference-of-convex optimization problems

Peiran Yu, Ting Kei Pong, Zhaosong Lu

Research output: Contribution to journal › Article › peer-review



In this paper, we study the sequential convex programming method with monotone line search (SCPls) in [Z. Lu, Sequential Convex Programming Methods for a Class of Structured Nonlinear Programming, 2012] for a class of difference-of-convex optimization problems with multiple smooth inequality constraints. SCPls is a representative variant of moving-ball-approximation-type algorithms [A. Auslender, R. Shefi, and M. Teboulle, SIAM J. Optim., 20 (2010), pp. 3232-3259; J. Bolte, Z. Chen, and E. Pauwels, Math. Program., 182 (2020), pp. 1-36; J. Bolte and E. Pauwels, Math. Oper. Res., 41 (2016), pp. 442-465; R. Shefi and M. Teboulle, Math. Program., 159 (2016), pp. 137-164] for constrained optimization problems. We analyze the convergence rate of the sequence generated by SCPls in both nonconvex and convex settings by imposing suitable Kurdyka-Łojasiewicz (KL) assumptions. Specifically, in the nonconvex settings, we assume that a special potential function related to the objective and the constraints is a KL function, while in the convex settings we impose KL assumptions directly on the extended objective function (i.e., the sum of the objective and the indicator function of the constraint set). A relationship between these two different KL assumptions is established in the convex settings under additional differentiability assumptions. We also discuss how to deduce the KL exponent of the extended objective function from its Lagrangian in the convex settings, under additional assumptions on the constraint functions. Thanks to this result, the extended objectives of some constrained optimization models, such as minimizing ℓ1 subject to logistic/Poisson loss, are found to be KL functions with exponent 1/2 under mild assumptions. To illustrate how our results can be applied, we consider SCPls for minimizing ℓ1-2 [P. Yin, Y. Lou, Q. He, and J. Xin, SIAM J. Sci. Comput., 37 (2015), pp. A536-A563] subject to residual error measured by the ℓ2 norm/Lorentzian norm [R. E. Carrillo, A. B. Ramirez, G. R. Arce, K. E. Barner, and B. M. Sadler, EURASIP J. Adv. Signal Process. (2016), 108]. We first discuss how the various conditions required in our analysis can be verified, and then perform numerical experiments to illustrate the convergence behaviors of SCPls.
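As a rough sketch of the illustrative model named in the abstract (the symbols A, b, and σ are our notation, not taken from the paper), the ℓ1-2 minimization problem with residual error measured by the ℓ2 norm can be written as:

```latex
% Constrained difference-of-convex model: the objective
% \|x\|_1 - \|x\|_2 is a difference of two convex functions,
% and the constraint function is smooth on the feasible region.
\min_{x \in \mathbb{R}^n} \; \|x\|_1 - \|x\|_2
\quad \text{subject to} \quad \|Ax - b\|_2 \le \sigma,
```

where A is a measurement matrix, b an observation vector, and σ > 0 a noise level; the Lorentzian-norm variant replaces the ℓ2 residual measure in the constraint.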

Original language: English (US)
Pages (from-to): 2024-2054
Number of pages: 31
Journal: SIAM Journal on Optimization
Issue number: 3
State: Published - 2021

Bibliographical note

Funding Information:
* Received by the editors January 21, 2020; accepted for publication (in revised form) May 17, 2021; published electronically August 9, 2021. Funding: The second author was supported partly by Hong Kong Research Grants Council PolyU153005/17p. † Applied Mathematics, Hong Kong Polytechnic University, Kowloon, Hong Kong (peiran.yu@, ‡ Department of Industrial and Systems Engineering, University of Minnesota, Minneapolis, MN 55455 USA (

Publisher Copyright:
© 2021 Society for Industrial and Applied Mathematics.


  • Constrained optimization
  • Difference-of-convex optimization
  • Kurdyka-Łojasiewicz property
  • Linear convergence


