Parallel Algorithms for Constrained Tensor Factorization via Alternating Direction Method of Multipliers

Athanasios P. Liavas, Nicholas D. Sidiropoulos

Research output: Contribution to journal › Article › peer-review

59 Scopus citations


Tensor factorization has proven useful in a wide range of applications, from sensor array processing to communications, speech and audio signal processing, and machine learning. With a few recent exceptions, all tensor factorization algorithms were originally developed for centralized, in-memory computation on a single machine, and the few that break away from this mold do not easily incorporate practically important constraints, such as non-negativity. A new constrained tensor factorization framework is proposed in this paper, building upon the Alternating Direction Method of Multipliers (ADMoM). It is shown that this simplifies computations, bypassing the need to solve constrained optimization problems in each iteration, and that it naturally leads to distributed algorithms suitable for parallel implementation. This opens the door for many emerging big data-enabled applications. The methodology is exemplified using non-negativity as a baseline constraint, but the proposed framework can incorporate many other types of constraints. Numerical experiments are encouraging, indicating that ADMoM-based non-negative tensor factorization (NTF) has high potential as an alternative to state-of-the-art approaches.
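To make the ADMM idea in the abstract concrete, the following is a minimal illustrative sketch (not the paper's exact ADMoM algorithm) of the building block it alludes to: handling a non-negativity constraint by variable splitting, so each iteration needs only an unconstrained least-squares solve and a cheap projection. The routine solves the non-negative least-squares subproblem min_{H ≥ 0} ||M − W H||_F^2 that arises when updating one factor of an NTF model; the function name, `rho`, and iteration count are illustrative assumptions.

```python
import numpy as np

def admm_nnls(W, M, rho=1.0, n_iter=500):
    """Solve min_{H >= 0} ||M - W H||_F^2 via ADMM with the split H = H_tilde.

    Illustrative sketch: H is unconstrained, H_tilde carries the
    non-negativity constraint, U is the scaled dual variable.
    """
    k = W.shape[1]
    n = M.shape[1]
    H_tilde = np.zeros((k, n))   # constrained copy, kept nonnegative
    U = np.zeros((k, n))         # scaled dual variable
    G = W.T @ W + rho * np.eye(k)  # fixed system matrix, factor once in practice
    WtM = W.T @ M
    for _ in range(n_iter):
        # unconstrained least-squares step (closed form)
        H = np.linalg.solve(G, WtM + rho * (H_tilde - U))
        # projection onto the nonnegative orthant -- no constrained QP needed
        H_tilde = np.maximum(0.0, H + U)
        # dual update
        U = U + H - H_tilde
    return H_tilde
```

In an ADMoM-style NTF, each CP factor update would invoke a routine of this shape with W replaced by the Khatri-Rao product of the remaining factor matrices; because the per-iteration steps are a linear solve and an elementwise projection, they parallelize naturally across columns of M.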

Original language: English (US)
Article number: 7152968
Pages (from-to): 5450-5463
Number of pages: 14
Journal: IEEE Transactions on Signal Processing
Issue number: 20
State: Published - Oct 15 2015


Keywords

  • PARAFAC model
  • Tensor decomposition
  • Parallel algorithms

