Abstract
We study the least squares regression problem $\min_{\Theta} \|A(\Theta) - b\|_2^2$, where $\Theta$ is a low-rank tensor, defined as $\Theta = \sum_{r=1}^{R} \theta_1^{(r)} \circ \cdots \circ \theta_D^{(r)}$ for vectors $\theta_d^{(r)} \in \mathbb{R}^{p_d}$ for all $r \in [R]$ and $d \in [D]$. Here, $\circ$ denotes the outer product of vectors, and $A(\Theta)$ is a linear function of $\Theta$. This problem is motivated by the fact that the number of parameters in $\Theta$ is only $R \cdot \sum_{d=1}^{D} p_d$, which is significantly smaller than the $\prod_{d=1}^{D} p_d$ parameters of ordinary least squares regression. We consider the above CP decomposition model of tensors $\Theta$, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on sparse random projections $\Phi \in \mathbb{R}^{m \times n}$, with $m \ll n$, to reduce the problem to a much smaller problem $\min_{\Theta} \|\Phi A(\Theta) - \Phi b\|_2^2$, for which $\|\Phi A(\Theta) - \Phi b\|_2^2 = (1 \pm \varepsilon)\|A(\Theta) - b\|_2^2$ holds simultaneously for all $\Theta$. We obtain a significantly smaller dimension and sparsity in the randomized linear mapping $\Phi$ than is possible for ordinary least squares regression. Finally, we give a number of numerical simulations supporting our theory.
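A rough numerical illustration of the sketched objective is given below (a minimal sketch, assuming NumPy; the $D = 2$, rank-1 setup, the variable names, and the CountSketch-style choice of a sparse $\Phi$ with one nonzero per column are illustrative assumptions, not the paper's exact construction or guarantees):

```python
# Illustrative sketch: compare ||A(Theta) - b||_2^2 with ||Phi A(Theta) - Phi b||_2^2
# for a rank-1, D = 2 tensor Theta = u (outer) v and a CountSketch-style sparse Phi.
import numpy as np

rng = np.random.default_rng(0)
n, p1, p2, m = 2000, 8, 6, 200          # n observations, tensor sides p1 x p2, sketch size m

# Linear measurement operator A(Theta): row i is <X_i, Theta> for a random X_i.
X = rng.standard_normal((n, p1, p2))
u_true, v_true = rng.standard_normal(p1), rng.standard_normal(p2)
Theta_true = np.outer(u_true, v_true)    # rank-1 CP tensor (D = 2)
b = X.reshape(n, -1) @ Theta_true.ravel() + 0.01 * rng.standard_normal(n)

# CountSketch-style Phi in R^{m x n}: each coordinate gets a random sign and is
# hashed into one of m buckets, so Phi has a single nonzero per column.
bucket = rng.integers(0, m, size=n)
sign = rng.choice([-1.0, 1.0], size=n)

def countsketch(v):
    """Apply Phi to a length-n vector in O(n) time."""
    out = np.zeros(m)
    np.add.at(out, bucket, sign * v)
    return out

def objective(Theta, sketched=False):
    """Residual norm squared, optionally after applying the sketch Phi."""
    r = X.reshape(n, -1) @ Theta.ravel() - b
    return np.sum(countsketch(r) ** 2) if sketched else np.sum(r ** 2)

# For a candidate Theta, the sketched objective should be close to the true one
# (an empirical check of the (1 +/- eps) relation, not a proof).
Theta_cand = np.outer(rng.standard_normal(p1), rng.standard_normal(p2))
print(objective(Theta_cand), objective(Theta_cand, sketched=True))
```

The single nonzero per column of $\Phi$ is what makes applying the sketch cost only $O(n)$ per residual vector, which is the practical appeal of sparse projections here.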
Original language | English (US) |
---|---|
Pages (from-to) | 3467-3477 |
Number of pages | 11 |
Journal | Advances in Neural Information Processing Systems |
Volume | 2017-December |
State | Published - 2017 |
Event | 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States Duration: Dec 4 2017 → Dec 9 2017 |
Bibliographical note
Publisher Copyright: © 2017 Neural Information Processing Systems Foundation. All rights reserved.