Abstract
This paper puts forth a novel algorithm, termed truncated generalized gradient flow (TGGF), to solve for x ∈ ℝⁿ/ℂⁿ a system of m quadratic equations yᵢ = |⟨aᵢ, x⟩|², i = 1, 2, …, m, which is known to be NP-hard in general even for random {aᵢ ∈ ℝⁿ/ℂⁿ}, i = 1, …, m. We prove that as soon as the number of equations m is on the order of the number of unknowns n, TGGF recovers the solution exactly (up to a global unimodular constant) with high probability, at a complexity growing linearly with the time required to read the data {(aᵢ, yᵢ)}, i = 1, …, m. Specifically, TGGF proceeds in two stages: s1) a novel orthogonality-promoting initialization that is obtained with simple power iterations; and s2) a refinement of the initial estimate by successive updates of scalable truncated generalized gradient iterations. The former is in sharp contrast to existing spectral initializations, while the latter handles the rather challenging nonconvex and nonsmooth amplitude-based cost function. Empirical results demonstrate that: i) the novel orthogonality-promoting initialization method returns more accurate and robust estimates than its spectral counterparts; and ii) even with the same initialization, our refinement/truncation outperforms Wirtinger-based alternatives, corroborating the superior performance of TGGF over state-of-the-art algorithms.
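The two-stage scheme outlined above can be paraphrased in NumPy for the real-valued case. This is only a sketch: the size of the selected index set (m//6), the truncation threshold psi/(1 + γ), the step size, and the iteration counts are illustrative guesses, not the paper's tuned values.

```python
import numpy as np

def tggf(A, y, iters=1000, mu=0.6, gamma=0.7):
    """Sketch of the two-stage TGGF scheme for y_i = (a_i^T x)^2 (real case)."""
    m, n = A.shape
    psi = np.sqrt(y)
    rng = np.random.default_rng(1)

    # Stage 1: orthogonality-promoting initialization. Rows least correlated
    # with x (smallest y_i / ||a_i||^2) are nearly orthogonal to it, so x is
    # approximated by the direction least captured by their sample covariance,
    # found here with plain power iterations on the complementary matrix.
    norms2 = np.sum(A ** 2, axis=1)
    I0 = np.argsort(y / norms2)[: m // 6]      # size of I0 is a guess
    U = A[I0] / np.sqrt(norms2[I0])[:, None]   # unit-norm selected rows
    z = rng.standard_normal(n)
    for _ in range(300):                       # power iterations on I - U^T U / |I0|
        z = z - U.T @ (U @ z) / len(I0)
        z /= np.linalg.norm(z)
    z *= np.sqrt(y.mean())                     # E[y_i] = ||x||^2 for Gaussian a_i

    # Stage 2: truncated generalized gradient iterations on the nonsmooth
    # amplitude-based cost (1/2m) * sum_i (|a_i^T z| - psi_i)^2.
    for _ in range(iters):
        Az = A @ z
        keep = np.abs(Az) >= psi / (1.0 + gamma)  # truncation rule (paraphrased)
        r = np.where(keep, Az - psi * np.sign(Az), 0.0)
        z = z - (mu / m) * (A.T @ r)
    return z
```

The truncation in stage 2 drops measurements whose magnitude |aᵢᵀz| is small relative to ψᵢ, i.e., those whose sign estimate sign(aᵢᵀz) is least reliable, which is what makes the generalized gradient step well behaved despite the cost being nonconvex and nonsmooth.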
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 568-576 |
| Number of pages | 9 |
| Journal | Advances in Neural Information Processing Systems |
| State | Published - 2016 |
| Event | 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 - Barcelona, Spain |
| Duration | Dec 5, 2016 → Dec 10, 2016 |
Bibliographical note
Publisher Copyright: © 2016 NIPS Foundation - All Rights Reserved.