Memory-efficient parallel computation of tensor and matrix products for big tensor decomposition

Niranjay Ravindran, Nicholas D. Sidiropoulos, Shaden Smith, George Karypis

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution

9 Scopus citations

Abstract

Low-rank tensor decomposition has many applications in signal processing and machine learning, and is becoming increasingly important for analyzing big data. A significant challenge is the computation of intermediate products, which can be much larger than the final result of the computation, or even than the original tensor. We propose a scheme that allows memory-efficient in-place updates of intermediate matrices. Motivated by recent advances in big tensor decomposition from multiple compressed replicas, we also consider the related problem of memory-efficient tensor compression. The resulting algorithms can be parallelized, and can exploit but do not require sparsity.
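The intermediate blow-up the abstract describes is easiest to see in the MTTKRP (matricized tensor times Khatri-Rao product) step of CP decomposition. The NumPy sketch below, with hypothetical sizes and variable names (the paper's own in-place update scheme is not detailed in this abstract), contrasts the naive approach, which materializes the large Khatri-Rao product, with a slice-wise accumulation that never forms it:

```python
import numpy as np

# Hypothetical sizes for a dense 3-way tensor and rank-F factors.
I, J, K, F = 30, 40, 50, 4
rng = np.random.default_rng(0)
X = rng.standard_normal((I, J, K))   # the tensor
B = rng.standard_normal((J, F))      # mode-2 factor
C = rng.standard_normal((K, F))      # mode-3 factor

# Naive mode-1 MTTKRP: materialize the (K*J) x F Khatri-Rao product.
# This intermediate can dwarf both the factors and the I x F result.
KR = (C[:, None, :] * B[None, :, :]).reshape(K * J, F)
X1 = X.transpose(0, 2, 1).reshape(I, K * J)   # mode-1 unfolding (k-major)
M_naive = X1 @ KR

# Memory-efficient alternative: accumulate frontal-slice contributions
# in place, using only an I x F buffer and never forming KR.
M = np.zeros((I, F))
for k in range(K):
    M += (X[:, :, k] @ B) * C[k, :]

print(np.allclose(M_naive, M))   # both compute the same mode-1 MTTKRP
```

The slice-wise loop needs only O(IF) working memory versus O(JKF) for the explicit Khatri-Rao product, and its independent per-slice terms can be accumulated in parallel, in the spirit of the parallelization the abstract mentions.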

Original language: English (US)
Title of host publication: Conference Record of the 48th Asilomar Conference on Signals, Systems and Computers
Editors: Michael B. Matthews
Publisher: IEEE Computer Society
Pages: 581-585
Number of pages: 5
ISBN (Electronic): 9781479982974
DOIs
State: Published - Apr 24 2015
Event: 48th Asilomar Conference on Signals, Systems and Computers, ACSSC 2015 - Pacific Grove, United States
Duration: Nov 2 2014 - Nov 5 2014

Publication series

Name: Conference Record - Asilomar Conference on Signals, Systems and Computers
Volume: 2015-April
ISSN (Print): 1058-6393

Other

Other: 48th Asilomar Conference on Signals, Systems and Computers, ACSSC 2015
Country: United States
City: Pacific Grove
Period: 11/2/14 - 11/5/14

