A Majorization-Minimization Gauss-Newton Method for 1-Bit Matrix Completion

Xiaoqian Liu, Xu Han, Eric C. Chi, Boaz Nadler

Research output: Contribution to journal › Article › peer-review

Abstract

In 1-bit matrix completion, the aim is to estimate an underlying low-rank matrix from a partial set of binary observations. We propose a novel method for 1-bit matrix completion called Majorization-Minimization Gauss-Newton (MMGN). Our method is based on the majorization-minimization principle, which converts the original optimization problem into a sequence of standard low-rank matrix completion problems. We solve each of these sub-problems by a factorization approach that explicitly enforces the assumed low-rank structure and then apply a Gauss-Newton method. Using simulations and a real data example, we illustrate that, in comparison to existing 1-bit matrix completion methods, MMGN outputs comparable, if not more accurate, estimates. In addition, it is often significantly faster and less sensitive to the spikiness of the underlying matrix. In comparison with three standard generic optimization approaches that directly minimize the original objective, MMGN also exhibits a clear computational advantage, especially when the fraction of observed entries is small. Supplementary materials for this article are available online.
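The abstract describes a two-level structure: an outer majorization-minimization loop that replaces the 1-bit likelihood with a sequence of standard low-rank matrix completion sub-problems, and an inner Gauss-Newton step on an explicit factorization. The sketch below is only an illustration of that structure under assumptions not stated in the abstract: a logistic link, observations y_ij in {-1, +1}, a curvature bound of 1/4 for the quadratic majorizer, and a single damped Gauss-Newton step (via SciPy's lsqr) per outer iteration. All names (mmgn_sketch, rank, n_iters, and so on) are hypothetical and do not reflect the authors' implementation.

```python
# Minimal sketch of the MM + Gauss-Newton structure from the abstract.
# Assumptions (not from the paper): logistic link, y_ij in {-1, +1},
# one damped Gauss-Newton step per MM iteration. Not the authors' code.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import LinearOperator, lsqr


def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))


def mmgn_sketch(rows, cols, y, shape, rank, n_iters=50, seed=0):
    """Estimate a low-rank matrix from binary observations y at (rows, cols)."""
    m, n = shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, rank)) / np.sqrt(rank)
    V = rng.standard_normal((n, rank)) / np.sqrt(rank)

    for _ in range(n_iters):
        x = np.sum(U[rows] * V[cols], axis=1)  # current fit on observed entries
        # MM step: the logistic negative log-likelihood has curvature <= 1/4, so
        # it is majorized (up to a constant) by (1/8) * sum_Omega (M_ij - z_ij)^2
        # with targets z = x + 4 * y * sigmoid(-y * x).
        z = x + 4.0 * y * sigmoid(-y * x)

        # Inner sub-problem: min_{U,V} sum_Omega ((U V^T)_ij - z_ij)^2,
        # handled here with one Gauss-Newton step on the factored form.
        r = x - z  # residuals at the current iterate

        def matvec(p):
            # Jacobian times direction: entries of dU V^T + U dV^T on Omega.
            dU = p[: m * rank].reshape(m, rank)
            dV = p[m * rank:].reshape(n, rank)
            return np.sum(dU[rows] * V[cols], axis=1) + np.sum(U[rows] * dV[cols], axis=1)

        def rmatvec(w):
            # Jacobian transpose times a vector over the observed entries.
            S = csr_matrix((w, (rows, cols)), shape=shape)
            return np.concatenate([(S @ V).ravel(), (S.T @ U).ravel()])

        J = LinearOperator((len(y), (m + n) * rank), matvec=matvec, rmatvec=rmatvec)
        p = lsqr(J, -r, damp=1e-4)[0]  # small damping for numerical stability
        U = U + p[: m * rank].reshape(m, rank)
        V = V + p[m * rank:].reshape(n, rank)

    return U @ V.T
```

The Jacobian of the factored residual is applied matrix-free through a LinearOperator, so the Gauss-Newton least-squares sub-problem never forms a dense Jacobian; this is one plausible way to realize the factorization-plus-Gauss-Newton idea the abstract describes, not a statement of how the published method is implemented.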

Original language: English (US)
Pages (from-to): 1017-1029
Number of pages: 13
Journal: Journal of Computational and Graphical Statistics
Volume: 34
Issue number: 3
DOIs
State: Published - 2025
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2025 The Author(s). Published with license by Taylor & Francis Group, LLC.

Keywords

  • Binary observations
  • Constrained least squares
  • Low-rank matrix
  • Maximum likelihood estimate
