Probabilistic matrix addition

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

We introduce Probabilistic Matrix Addition (PMA) for modeling real-valued data matrices by simultaneously capturing covariance structure among rows and among columns. PMA additively combines two latent matrices drawn from two Gaussian Processes respectively over rows and columns. The resulting joint distribution over the observed matrix does not factorize over entries, rows, or columns, and can thus capture intricate dependencies in the matrix. Exact inference in PMA is possible, but involves inversion of large matrices, and can be computationally prohibitive. Efficient approximate inference is possible due to the sparse dependency structure among latent variables. We propose two families of approximate inference algorithms for PMA based on Gibbs sampling and MAP inference. We demonstrate the effectiveness of PMA for missing value prediction and multi-label classification problems.
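
As a reading aid, below is a minimal sketch (not from the paper) of the generative process the abstract describes, in Python/NumPy: each column of one latent matrix is drawn from a Gaussian Process over rows, each row of the other from a Gaussian Process over columns, and the two are added. The function names, the explicit observation-noise term, and the squared-exponential covariances are illustrative assumptions, not details taken from the paper.

import numpy as np

def sample_pma(K_rows, K_cols, noise_var=0.1, seed=None):
    """Sample one m x n matrix from a PMA-style generative process (sketch)."""
    rng = np.random.default_rng(seed)
    m, n = K_rows.shape[0], K_cols.shape[0]
    # F: every column shares the row covariance K_rows (GP over rows).
    F = rng.multivariate_normal(np.zeros(m), K_rows, size=n).T  # m x n
    # G: every row shares the column covariance K_cols (GP over columns).
    G = rng.multivariate_normal(np.zeros(n), K_cols, size=m)    # m x n
    # Additive combination; the i.i.d. Gaussian noise term is an assumption.
    return F + G + np.sqrt(noise_var) * rng.standard_normal((m, n))

def se_cov(size, length_scale=3.0):
    """Squared-exponential covariance over integer row/column indices."""
    idx = np.arange(size)
    return np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / length_scale) ** 2)

Y = sample_pma(se_cov(20), se_cov(15), noise_var=0.05, seed=0)
print(Y.shape)  # (20, 15)

Under this construction the covariance of vec(Y) is the Kronecker sum I_n ⊗ K_rows + K_cols ⊗ I_m (plus the noise term), which is why the joint distribution does not factorize over entries, rows, or columns, and why exact inference requires inverting large matrices.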

Original language: English (US)
Title of host publication: Proceedings of the 28th International Conference on Machine Learning, ICML 2011
Pages: 1025-1032
Number of pages: 8
State: Published - Oct 7 2011
Event: 28th International Conference on Machine Learning, ICML 2011 - Bellevue, WA, United States
Duration: Jun 28 2011 - Jul 2 2011

Publication series

Name: Proceedings of the 28th International Conference on Machine Learning, ICML 2011

Other

Other: 28th International Conference on Machine Learning, ICML 2011
Country: United States
City: Bellevue, WA
Period: 6/28/11 - 7/2/11

Fingerprint

Values
Labels
Sampling

Cite this

Agovic, A., Banerjee, A., & Chatterjee, S. B. (2011). Probabilistic matrix addition. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011 (pp. 1025-1032). (Proceedings of the 28th International Conference on Machine Learning, ICML 2011).

@inproceedings{3c6f5aa12d144433b105acb0863d7565,
title = "Probabilistic matrix addition",
abstract = "We introduce Probabilistic Matrix Addition (PMA) for modeling real-valued data matrices by simultaneously capturing covariance structure among rows and among columns. PMA additively combines two latent matrices drawn from two Gaussian Processes respectively over rows and columns. The resulting joint distribution over the observed matrix does not factorize over entries, rows, or columns, and can thus capture intricate dependencies in the matrix. Exact inference in PMA is possible, but involves inversion of large matrices, and can be computationally prohibitive. Efficient approximate inference is possible due to the sparse dependency structure among latent variables. We propose two families of approximate inference algorithms for PMA based on Gibbs sampling and MAP inference. We demonstrate the effectiveness of PMA for missing value prediction and multi-label classification problems.",
author = "Agovic, Amrudin and Banerjee, Arindam and Chatterjee, {Singdhansu B.}",
year = "2011",
month = "10",
day = "7",
language = "English (US)",
isbn = "9781450306195",
series = "Proceedings of the 28th International Conference on Machine Learning, ICML 2011",
pages = "1025--1032",
booktitle = "Proceedings of the 28th International Conference on Machine Learning, ICML 2011",
}
