### Abstract

We introduce Probabilistic Matrix Addition (PMA) for modeling real-valued data matrices by simultaneously capturing covariance structure among rows and among columns. PMA additively combines two latent matrices drawn from two Gaussian Processes respectively over rows and columns. The resulting joint distribution over the observed matrix does not factorize over entries, rows, or columns, and can thus capture intricate dependencies in the matrix. Exact inference in PMA is possible, but involves inversion of large matrices, and can be computationally prohibitive. Efficient approximate inference is possible due to the sparse dependency structure among latent variables. We propose two families of approximate inference algorithms for PMA based on Gibbs sampling and MAP inference. We demonstrate the effectiveness of PMA for missing value prediction and multi-label classification problems.
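The additive construction described above can be sketched generatively: draw one latent matrix whose columns share a covariance over rows, another whose rows share a covariance over columns, and sum them. The sketch below is a minimal NumPy illustration under assumed choices (RBF kernels, matrix sizes, and lengthscales are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 15  # rows and columns of the observed matrix (illustrative sizes)

def rbf_gram(size, lengthscale):
    """RBF Gram matrix over integer indices, with jitter for positive definiteness.
    Hypothetical kernel choice; PMA allows any positive-definite covariance."""
    idx = np.arange(size, dtype=float)
    d2 = (idx[:, None] - idx[None, :]) ** 2
    return np.exp(-d2 / (2.0 * lengthscale**2)) + 1e-6 * np.eye(size)

K_row = rbf_gram(n, 3.0)  # n x n covariance among rows
K_col = rbf_gram(m, 3.0)  # m x m covariance among columns

# F: each column is an independent draw from the GP over rows, i.e. N(0, K_row)
F = rng.multivariate_normal(np.zeros(n), K_row, size=m).T  # n x m
# G: each row is an independent draw from the GP over columns, i.e. N(0, K_col)
G = rng.multivariate_normal(np.zeros(m), K_col, size=n)    # n x m

Y = F + G  # observed matrix: entries are coupled across both rows and columns
print(Y.shape)  # (20, 15)
```

Because every entry of `Y` shares a row-wise latent component with its column neighbors and a column-wise component with its row neighbors, the joint distribution over `Y` does not factorize, which is exactly why exact inference requires inverting large (Kronecker-structured) covariance matrices.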

Original language | English (US) |
---|---|
Title of host publication | Proceedings of the 28th International Conference on Machine Learning, ICML 2011 |
Pages | 1025-1032 |
Number of pages | 8 |
State | Published - Oct 7 2011 |
Event | 28th International Conference on Machine Learning, ICML 2011 - Bellevue, WA, United States; Jun 28 2011 → Jul 2 2011 |

### Publication series

Name | Proceedings of the 28th International Conference on Machine Learning, ICML 2011 |
---|---|

### Other

Other | 28th International Conference on Machine Learning, ICML 2011 |
---|---|
Country | United States |
City | Bellevue, WA |
Period | 6/28/11 → 7/2/11 |

### Cite this

Agovic, A., Banerjee, A., & Chatterjee, S. B. (2011). **Probabilistic matrix addition.** In *Proceedings of the 28th International Conference on Machine Learning, ICML 2011* (pp. 1025-1032). Bellevue, WA, United States.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Probabilistic matrix addition

AU - Agovic, Amrudin

AU - Banerjee, Arindam

AU - Chatterjee, Singdhansu B

PY - 2011/10/7

Y1 - 2011/10/7

UR - http://www.scopus.com/inward/record.url?scp=80053449894&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=80053449894&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:80053449894

SN - 9781450306195

T3 - Proceedings of the 28th International Conference on Machine Learning, ICML 2011

SP - 1025

EP - 1032

BT - Proceedings of the 28th International Conference on Machine Learning, ICML 2011

ER -