A fast unified algorithm for solving group-lasso penalized learning problems

Yi Yang, Hui Zou

Research output: Contribution to journal › Article › peer-review

146 Scopus citations


This paper concerns a class of group-lasso learning problems where the objective function is the sum of an empirical loss and the group-lasso penalty. For a class of loss functions satisfying a quadratic majorization condition, we derive a unified algorithm called groupwise-majorization-descent (GMD) for efficiently computing the solution paths of the corresponding group-lasso penalized learning problem. GMD allows for general design matrices, without requiring the predictors to be group-wise orthonormal. As illustrative examples, we develop concrete algorithms for solving the group-lasso penalized least squares and several group-lasso penalized large margin classifiers. These group-lasso models have been implemented in an R package gglasso publicly available from the Comprehensive R Archive Network (CRAN) at http://cran.r-project.org/web/packages/gglasso. On simulated and real data, gglasso consistently outperforms the existing software for computing the group-lasso that implements either the classical groupwise descent algorithm or Nesterov's method.
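The groupwise-majorization-descent idea described in the abstract can be sketched for the least-squares case: each group's block of the loss is majorized by a quadratic with curvature equal to the largest eigenvalue of that block's Gram matrix, and minimizing the majorizer plus the group penalty gives a closed-form groupwise soft-thresholding update. The sketch below is a minimal illustration under these assumptions (the function name, unit group weights, and plain λ penalty are ours), not the gglasso implementation:

```python
import numpy as np

def gmd_group_lasso_ls(X, y, groups, lam, n_iter=200, tol=1e-8):
    """Groupwise-majorization-descent sketch for group-lasso least squares.

    Minimizes (1/2n)||y - X b||^2 + lam * sum_k ||b_k||_2, where `groups`
    is a list of column-index arrays, one per (non-overlapping) group.
    """
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta  # residual, kept up to date incrementally
    # gamma_k: largest eigenvalue of X_k^T X_k / n, the quadratic
    # majorization constant for group k (no group-wise orthonormality needed)
    gammas = [np.linalg.eigvalsh(X[:, g].T @ X[:, g] / n)[-1] for g in groups]
    for _ in range(n_iter):
        max_change = 0.0
        for g, gam in zip(groups, gammas):
            b_old = beta[g].copy()
            u = -X[:, g].T @ r / n          # gradient of the loss w.r.t. group g
            z = gam * b_old - u             # center of the quadratic majorizer
            znorm = np.linalg.norm(z)
            # groupwise soft-thresholding: exact minimizer of the
            # majorizer plus lam * ||b_g||_2
            b_new = np.zeros_like(b_old) if znorm <= lam else (1 - lam / znorm) * z / gam
            beta[g] = b_new
            r -= X[:, g] @ (b_new - b_old)  # keep residual consistent with beta
            max_change = max(max_change, np.max(np.abs(b_new - b_old)))
        if max_change < tol:
            break
    return beta
```

With lam large enough the whole coefficient vector is thresholded to zero; with lam = 0 the sweeps reduce to blockwise gradient steps that converge to the least-squares solution.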

Original language: English (US)
Pages (from-to): 1129-1141
Number of pages: 13
Journal: Statistics and Computing
Issue number: 6
State: Published - Nov 30 2015

Bibliographical note

Funding Information:
The authors thank the editor, an associate editor and two referees for their helpful comments and suggestions. This work is supported in part by NSF Grant DMS-08-46068.

Publisher Copyright:
© 2014, Springer Science+Business Media New York.


Keywords

  • Group lasso
  • Groupwise descent
  • Large margin classifiers
  • MM principle
  • SLEP
  • grplasso

