Randomized block proximal damped Newton method for composite self-concordant minimization

Research output: Contribution to journal · Article · peer-review



In this paper we consider the composite self-concordant (CSC) minimization problem, which minimizes the sum of a self-concordant function f and a (possibly nonsmooth) proper closed convex function g. CSC minimization is the cornerstone of path-following interior point methods for solving a broad class of convex optimization problems, and it has also found numerous applications in machine learning. Proximal damped Newton (PDN) methods, which enjoy a nice iteration complexity, have been well studied in the literature for solving this problem. Given that at each iteration these methods typically require evaluating or accessing the Hessian of f and also need to solve a proximal Newton subproblem, the cost per iteration can be prohibitively high when applied to large-scale problems. Inspired by the recent success of block coordinate descent methods, we propose a randomized block proximal damped Newton (RBPDN) method for solving the CSC minimization problem. Compared to the PDN methods, the computational cost per iteration of RBPDN is usually significantly lower. Computational experiments on a class of regularized logistic regression problems demonstrate that RBPDN is indeed promising for solving large-scale CSC minimization problems. The convergence of RBPDN is also analyzed in the paper. In particular, we show that RBPDN is globally convergent when g is Lipschitz continuous. It is also shown that RBPDN enjoys local linear convergence. Moreover, we establish a global linear rate of convergence for a class of g including the case where g is smooth (but not necessarily self-concordant) and ∇g is Lipschitz continuous in a certain level set of f + g. As a consequence, we obtain a global linear rate of convergence for the classical damped Newton methods [Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Kluwer, Boston, 2004; Y. Zhang and L. Xiao, preprint, arXiv:1501.00263, 2015] and the PDN method [Q. Tran-Dinh, A. Kyrillidis, and V. Cevher, J. Mach. Learn. Res., 16 (2015), pp. 371–416] for such g, which was previously unknown in the literature. Moreover, this result can be used to sharpen the existing iteration complexity of these methods.
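The core idea described above can be illustrated with a hypothetical minimal sketch: at each iteration, pick a random block of coordinates, compute the block of the Hessian of f restricted to it, and take a damped Newton step on that block only. For brevity this sketch sets g ≡ 0 (so the proximal subproblem reduces to a block linear solve) and uses a logistic-regression loss for f; the function names (`rbpdn_step`, `logistic_loss`) are illustrative, not the authors' implementation:

```python
import numpy as np

def logistic_loss(A, b, x):
    """Logistic regression loss: sum_i log(1 + exp(-b_i * a_i^T x))."""
    return np.sum(np.log1p(np.exp(-b * (A @ x))))

def logistic_grad_hess(A, b, x):
    """Gradient and Hessian of the logistic loss (labels b in {-1, +1})."""
    p = 1.0 / (1.0 + np.exp(b * (A @ x)))    # sigmoid(-b_i * a_i^T x)
    grad = -A.T @ (b * p)
    W = p * (1.0 - p)                        # per-sample Hessian weights
    hess = A.T @ (A * W[:, None])
    return grad, hess

def rbpdn_step(A, b, x, block):
    """One randomized-block damped Newton update on the coordinates in `block`.

    Only the block gradient g_I and block Hessian H_II are used, which is
    what makes the per-iteration cost much lower than a full Newton step.
    """
    grad, hess = logistic_grad_hess(A, b, x)
    g_I = grad[block]
    H_II = hess[np.ix_(block, block)]
    d_I = -np.linalg.solve(H_II, g_I)        # block Newton direction
    lam = np.sqrt(d_I @ (H_II @ d_I))        # local norm of the direction
    x_new = x.copy()
    x_new[block] += d_I / (1.0 + lam)        # damped step size 1/(1 + lam)
    return x_new

# Small synthetic instance.
rng = np.random.default_rng(0)
n, d = 80, 10
A = rng.standard_normal((n, d))
b = np.sign(rng.standard_normal(n))
x = np.zeros(d)
for _ in range(50):
    block = rng.choice(d, size=3, replace=False)  # random coordinate block
    x = rbpdn_step(A, b, x, block)
```

The damping factor 1/(1 + λ), with λ the local norm of the Newton direction, is the standard damped Newton step size for self-concordant functions; handling a nonsmooth g would replace the linear solve by a scaled proximal subproblem on the block.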

Original language: English (US)
Pages (from-to): 1910-1942
Number of pages: 33
Journal: SIAM Journal on Optimization
Issue number: 3
State: Published - 2017
Externally published: Yes

Bibliographical note

Funding Information:
Received by the editors July 1, 2016; accepted for publication (in revised form) June 7, 2017; published electronically September 6, 2017. http://www.siam.org/journals/siopt/27-3/M108276.html
Funding: The work of this author was supported in part by an NSERC Discovery Grant.
Department of Mathematics, Simon Fraser University, Burnaby, BC V5A 1S6, Canada ([email protected]).

Publisher Copyright:
© 2017 Society for Industrial and Applied Mathematics


Keywords

  • Composite self-concordant minimization
  • Damped Newton method
  • Proximal damped Newton method
  • Randomized block proximal damped Newton method


