Abstract
Neural networks with rectified linear unit (ReLU) activation functions (a.k.a. ReLU networks) have achieved great empirical success in various domains. Nonetheless, existing results for learning ReLU networks either impose assumptions on the underlying data distribution (e.g., that it is Gaussian), or require the network size and/or training set size to be sufficiently large. In this context, the problem of learning a two-layer ReLU network is approached in a binary classification setting, where the data are linearly separable and a hinge loss criterion is adopted. Leveraging the power of random noise perturbation, this paper presents a novel stochastic gradient descent (SGD) algorithm, which can provably train any single-hidden-layer ReLU network to attain global optimality, despite the presence of infinitely many bad local minima, maxima, and saddle points in general. This result is the first of its kind, requiring no assumptions on the data distribution, training/network size, or initialization. Convergence of the resultant iterative algorithm to a global minimum is analyzed by establishing both an upper bound and a lower bound on the number of non-zero updates to be performed. Moreover, generalization guarantees are developed for ReLU networks trained with the novel SGD by leveraging classic compression bounds. These guarantees highlight a key difference (at least in the worst case) between reliably learning a ReLU network and a leaky ReLU network in terms of sample complexity. Numerical tests using both synthetic data and real images validate the effectiveness of the algorithm and the practical merits of the theory.
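To make the training setup described above more concrete, the following is a minimal sketch of a perturbed-SGD loop for a single-hidden-layer ReLU network with hinge loss on linearly separable data. The data generator, the network width `k`, the fixed ±1 output weights, the rule of injecting Gaussian noise only when the stochastic (sub)gradient vanishes on a misclassified sample, and all step-size/noise constants are illustrative assumptions; the paper's precise algorithm and its optimality guarantees are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable data: y = sign(a^T x) for a hidden direction a.
n, d = 200, 5
a_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ a_true)

# One-hidden-layer ReLU network: f(x) = v^T relu(W x).
k = 10                                  # number of hidden units (arbitrary here)
W = rng.normal(scale=0.1, size=(k, d))  # hidden-layer weights (trained)
v = rng.choice([-1.0, 1.0], size=k)     # output weights (kept fixed in this sketch)

def forward(x):
    return v @ np.maximum(W @ x, 0.0)

eta, sigma, epochs = 0.1, 0.01, 50      # step size / noise level: illustrative values

for _ in range(epochs):
    for i in rng.permutation(n):
        x, label = X[i], y[i]
        if label * forward(x) >= 1.0:   # hinge loss max(0, 1 - y f(x)) is zero: skip
            continue
        active = (W @ x > 0).astype(float)
        # (Sub)gradient of the hinge loss w.r.t. W on this sample.
        grad = -label * (v * active)[:, None] * x[None, :]
        if np.allclose(grad, 0.0):
            # Misclassified sample but zero gradient (all ReLUs inactive):
            # inject random noise to escape the flat region.  The exact
            # perturbation rule of the paper is not given in the abstract;
            # Gaussian noise is a hypothetical stand-in.
            W = W + sigma * rng.normal(size=W.shape)
        else:
            W = W - eta * grad
```

The noise injection is what distinguishes this loop from plain SGD: on a flat region where every hidden unit is inactive for a misclassified sample, a vanilla subgradient step would make no progress, whereas the random perturbation lets the iterate move off that region.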
| Original language | English (US) |
| --- | --- |
| Article number | 8671751 |
| Pages (from-to) | 2357-2370 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Signal Processing |
| Volume | 67 |
| Issue number | 9 |
| DOIs | |
| State | Published - May 1 2019 |
Bibliographical note
Funding Information: The work of G. Wang and G. B. Giannakis was supported in part by the National Science Foundation under Grants 1500713, 1514056, 1505970, and 1711471. The work of J. Chen was supported in part by the National Natural Science Foundation of China under Grants U1509215 and 61621063, and in part by the Program for Changjiang Scholars and Innovative Research Team in University (IRT1208).
Manuscript received August 10, 2018; revised December 17, 2018; accepted February 20, 2019. Date of publication March 20, 2019; date of current version April 1, 2019. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Sotirios Chatzis. (Corresponding author: Georgios B. Giannakis.) G. Wang and G. B. Giannakis are with the Digital Technology Center and the Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455 USA (e-mail: gangwang@umn.edu; georgios@umn.edu).
Publisher Copyright:
© 1991-2012 IEEE.
Keywords
- Deep learning
- escaping local minima
- generalization
- global optimality
- stochastic gradient descent