Abstract
This paper examines a general class of noisy matrix completion tasks, where the goal is to estimate a matrix from observations obtained at a subset of its entries, each of which is subject to random noise or corruption. Our specific focus is on settings where the matrix to be estimated is well-approximated by a product of two (a priori unknown) matrices, one of which is sparse. Such structural models, referred to here as sparse factor models, have been widely used, for example, in subspace clustering applications, as well as in contemporary sparse modeling and dictionary learning tasks. Our main theoretical contributions are estimation error bounds for sparsity-regularized maximum likelihood estimators for problems of this form, which are applicable to a number of different observation noise or corruption models. Several specific implications are examined, including scenarios where observations are corrupted by additive Gaussian noise or additive heavier-tailed (Laplace) noise, Poisson-distributed observations, and highly quantized (e.g., 1-bit) observations. We also propose a simple algorithmic approach based on the alternating direction method of multipliers for these tasks, and provide experimental evidence to support our error analyses.
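To make the sparse factor model concrete: the abstract considers estimating a matrix well-approximated by a product D A, with A sparse, from noisy observations on a subset of entries. The sketch below is a hypothetical illustration of that setup under the Gaussian-noise (squared-loss) case, using simple alternating proximal-gradient updates with soft-thresholding on the sparse factor; it is not the paper's ADMM procedure, and all function and parameter names (`sparse_factor_complete`, `lam`, `rank`) are invented for this example.

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_factor_complete(Y, mask, rank, lam=0.1, n_iters=100, seed=0):
    """Estimate X ~ D @ A from the entries of Y observed where mask is True,
    with an l1 penalty encouraging A to be sparse (Gaussian-noise case).
    Alternating proximal-gradient sketch, not the paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    D = rng.standard_normal((m, rank))
    A = rng.standard_normal((rank, n)) * 0.01
    for _ in range(n_iters):
        # Gradient step on A (squared loss over observed entries),
        # followed by soft-thresholding to enforce sparsity.
        R = mask * (D @ A - Y)
        step_A = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-8)
        A = soft_threshold(A - step_A * (D.T @ R), lam * step_A)
        # Gradient step on D (no sparsity penalty on the dictionary).
        R = mask * (D @ A - Y)
        step_D = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-8)
        D = D - step_D * (R @ A.T)
    return D, A
```

The step sizes are the reciprocals of the per-block Lipschitz constants (squared spectral norms of the fixed factor), so each update is a valid descent step; other losses from the abstract (Laplace, Poisson, quantized) would replace the squared-loss gradient accordingly.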
| Original language | English (US) |
| --- | --- |
| Article number | 7445217 |
| Pages (from-to) | 3636-3661 |
| Number of pages | 26 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 62 |
| Issue number | 6 |
| State | Published - Jun 2016 |
Bibliographical note
Funding Information: J. Haupt was supported in part by the Division of Astronomical Sciences through NSF under Grant AST-1247885, and in part by the Defense Advanced Research Projects Agency through a Young Faculty Award under Grant N66001-14-1-4047.
Publisher Copyright:
© 2016 IEEE.
Keywords
- Penalized maximum likelihood estimation
- dictionary learning
- matrix completion
- subspace clustering