## Abstract

We study the least squares regression problem min_{Θ} ∥A(Θ) − b∥²₂, where Θ is a low-rank tensor, defined as Θ = Σ_{r=1}^{R} θ₁^{(r)} ∘ ⋯ ∘ θ_{D}^{(r)}, for vectors θ_{d}^{(r)} ∈ ℝ^{p_d} for all r ∈ [R] and d ∈ [D]. Here, ∘ denotes the outer product of vectors, and A(Θ) is a linear function of Θ. This problem is motivated by the fact that the number of parameters in Θ is only R · Σ_{d=1}^{D} p_d, which is significantly smaller than the Π_{d=1}^{D} p_d parameters in ordinary least squares regression. We consider the above CP decomposition model of tensors Θ, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on sparse random projections Φ ∈ ℝ^{m×n}, with m ≪ n, to reduce the problem to a much smaller problem min_{Θ} ∥ΦA(Θ) − Φb∥²₂, for which ∥ΦA(Θ) − Φb∥²₂ = (1 ± ε)∥A(Θ) − b∥²₂ holds simultaneously for all Θ. We obtain a significantly smaller dimension and sparsity in the randomized linear mapping Φ than is possible for ordinary least squares regression. Finally, we give a number of numerical simulations supporting our theory.
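To illustrate the sketch-and-solve idea the abstract describes, here is a minimal sketch in Python for the ordinary (non-tensor) least squares case, using a CountSketch-style sparse projection Φ with one random ±1 entry per column. All dimensions (`n`, `p`, `m`) and the helper name `countsketch` are illustrative assumptions, not from the paper; the paper's contribution concerns the low-rank tensor setting, where much smaller `m` and sparsity suffice.

```python
import numpy as np

def countsketch(M, m, rng):
    """Compute Phi @ M for a CountSketch matrix Phi in R^{m x n}.

    Phi has exactly one nonzero (+1 or -1) per column, so the product
    costs O(nnz(M)) time; Phi is never formed explicitly.
    """
    n = M.shape[0]
    buckets = rng.integers(0, m, size=n)           # hash each row to a bucket
    signs = rng.choice([-1.0, 1.0], size=n)        # random sign per row
    PM = np.zeros((m, M.shape[1]))
    np.add.at(PM, buckets, signs[:, None] * M)     # accumulate signed rows
    return PM

rng = np.random.default_rng(0)
n, p, m = 10_000, 20, 2_000                        # m << n
A = rng.standard_normal((n, p))
b = A @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

# Solve the full problem and the sketched problem min_x ||Phi A x - Phi b||_2.
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
PAb = countsketch(np.column_stack([A, b]), m, rng)  # sketch [A | b] together
x_sk, *_ = np.linalg.lstsq(PAb[:, :p], PAb[:, p], rcond=None)

err_full = np.linalg.norm(A @ x_full - b)
err_sk = np.linalg.norm(A @ x_sk - b)
# The sketched solution's residual on the ORIGINAL problem should be a
# (1 + eps)-approximation of the optimal residual.
```

The sketched solve works on an m × p system instead of n × p, and for a subspace embedding Φ every candidate solution's cost is preserved up to (1 ± ε), so the minimizer of the sketched problem is near-optimal for the original one.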

| Original language | English (US) |
|---|---|
| Pages (from-to) | 3467-3477 |
| Number of pages | 11 |
| Journal | Advances in Neural Information Processing Systems |
| Volume | 2017-December |
| State | Published - Jan 1 2017 |
| Event | 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States |
| Duration | Dec 4 2017 → Dec 9 2017 |