A unified convergence analysis of block successive minimization methods for nonsmooth optimization

Meisam Razaviyayn, Mingyi Hong, Zhi Quan Luo

Research output: Contribution to journal › Article › peer-review

860 Scopus citations


The block coordinate descent (BCD) method is widely used for minimizing a continuous function f of several block variables. At each iteration of this method, a single block of variables is optimized, while the remaining variables are held fixed. To ensure the convergence of the BCD method, the subproblem of each block variable needs to be solved to its unique global optimum. Unfortunately, this requirement is often too restrictive for many practical scenarios. In this paper, we study an alternative inexact BCD approach which updates the variable blocks by successively minimizing a sequence of approximations of f which are either locally tight upper bounds of f or strictly convex local approximations of f. The main contributions of this work include the characterization of convergence conditions for a fairly wide class of such methods, especially for the cases where the objective functions are either nondifferentiable or nonconvex. Our results unify and extend the existing convergence results for many classical algorithms such as the BCD method, the difference of convex functions (DC) method, the expectation maximization (EM) algorithm, as well as the block forward-backward splitting algorithm, all of which are popular for large-scale optimization problems involving big data.
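As a concrete illustration of the scheme the abstract describes, the sketch below runs block successive upper-bound minimization on a toy two-block quadratic. The objective, the blockwise Lipschitz constant L, and the surrogate choice (a tight quadratic upper bound, whose minimizer is a blockwise gradient step of size 1/L) are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Sketch of block successive upper-bound minimization (BSUM) with
# quadratic surrogates. Toy objective (an assumption for illustration):
#   f(x, y) = (x - 1)^2 + (y + 2)^2 + 0.5*x*y

def f(x, y):
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + 0.5 * x * y

def grad_x(x, y):
    return 2.0 * (x - 1.0) + 0.5 * y

def grad_y(x, y):
    return 2.0 * (y + 2.0) + 0.5 * x

def bsum(x=0.0, y=0.0, iters=200, L=2.0):
    # Each block update minimizes the locally tight upper bound
    #   u_i(z; x_k) = f(x_k) + grad_i f(x_k) * (z - x_i^k) + (L/2) * (z - x_i^k)^2,
    # whose minimizer is a blockwise gradient step of length 1/L.
    for _ in range(iters):
        x = x - grad_x(x, y) / L   # update block 1 with block 2 held fixed
        y = y - grad_y(x, y) / L   # update block 2 using the new block 1
    return x, y

x_star, y_star = bsum()
```

Because f here is quadratic with block curvature exactly L = 2, each surrogate minimization coincides with exact block minimization, so this run also recovers classical BCD; the iterates converge to the unique minimizer (x, y) = (1.6, -2.4).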

Original language: English (US)
Pages (from-to): 1126-1153
Number of pages: 28
Journal: SIAM Journal on Optimization
Issue number: 2
State: Published - 2013


  • Block coordinate descent
  • Block successive upper-bound minimization
  • Successive convex approximation
  • Successive inner approximation

