Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

Neural networks with rectified linear unit (ReLU) activation functions (a.k.a. ReLU networks) have achieved great empirical success in various domains. Nonetheless, existing results for learning ReLU networks either impose assumptions on the underlying data distribution (e.g., Gaussian) or require the network size and/or training size to be sufficiently large. In this context, the problem of learning a two-layer ReLU network is approached in a binary classification setting, where the data are linearly separable and a hinge loss criterion is adopted. Leveraging the power of random noise perturbation, this paper presents a novel stochastic gradient descent (SGD) algorithm that can provably train any single-hidden-layer ReLU network to attain global optimality, despite the presence of infinitely many bad local minima, maxima, and saddle points in general. This result is the first of its kind, requiring no assumptions on the data distribution, training/network size, or initialization. Convergence of the resultant iterative algorithm to a global minimum is analyzed by establishing both an upper bound and a lower bound on the number of non-zero updates to be performed. Moreover, generalization guarantees are developed for ReLU networks trained with the novel SGD algorithm by leveraging classic compression bounds. These guarantees highlight a key difference (at least in the worst case) between reliably learning a ReLU network and a leaky ReLU network in terms of sample complexity. Numerical tests using both synthetic data and real images validate the effectiveness of the algorithm and the practical merits of the theory.
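
As a quick illustration of the idea sketched in the abstract, below is a minimal sketch, assuming NumPy, of noise-perturbed SGD on the hinge loss of a single-hidden-layer ReLU network for binary classification. It is not the authors' exact algorithm or code: the fixed ±1 output layer, the hyperparameters, and the function names (train_perturbed_sgd, predict) are illustrative assumptions.

import numpy as np

def relu(z):
    # elementwise rectified linear unit
    return np.maximum(z, 0.0)

def train_perturbed_sgd(X, y, k=10, lr=0.1, noise_std=0.01, epochs=50, seed=0):
    """X: (n, d) features; y: (n,) labels in {-1, +1}; k: number of hidden units.

    Sketch only: SGD on the hinge loss max(0, 1 - y * v^T relu(W x)),
    with a small random perturbation added after every non-zero update.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(k, d))   # trainable hidden-layer weights
    v = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)      # fixed output weights (simplifying assumption)
    for _ in range(epochs):
        for i in rng.permutation(n):
            h = relu(W @ X[i])                 # hidden activations
            margin = y[i] * (v @ h)            # signed margin of example i
            if margin < 1.0:                   # hinge loss active: perform a non-zero update
                # subgradient of the hinge loss with respect to W
                grad_W = -y[i] * np.outer(v * (W @ X[i] > 0), X[i])
                W -= lr * grad_W
                W += noise_std * rng.normal(size=W.shape)  # random noise perturbation
    return W, v

def predict(W, v, X):
    # sign of the network output for each row of X
    return np.sign(relu(X @ W.T) @ v)

The margin < 1.0 check is where the "non-zero updates" counted in the abstract would occur in a scheme like this: the weights move (and are perturbed) only when a training example violates the hinge margin.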

Original language: English (US)
Article number: 8671751
Pages (from-to): 2357-2370
Number of pages: 14
Journal: IEEE Transactions on Signal Processing
Volume: 67
Issue number: 9
DOI: 10.1109/TSP.2019.2904921
State: Published - May 1, 2019

Keywords

  • Deep learning
  • escaping local minima
  • generalization
  • global optimality
  • stochastic gradient descent

Cite this

Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization. / Wang, Gang; Giannakis, Georgios B; Chen, Jie.

In: IEEE Transactions on Signal Processing, Vol. 67, No. 9, 8671751, 01.05.2019, p. 2357-2370.

Research output: Contribution to journal › Article

@article{982c53bed51347f1a259604b176de840,
title = "Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization",
abstract = "Neural networks with rectified linear unit (ReLU) activation functions (a.k.a. ReLU networks) have achieved great empirical success in various domains. Nonetheless, existing results for learning ReLU networks either pose assumptions on the underlying data distribution being, e.g., Gaussian, or require the network size and/or training size to be sufficiently large. In this context, the problem of learning a two-layer ReLU network is approached in a binary classification setting, where the data are linearly separable and a hinge loss criterion is adopted. Leveraging the power of random noise perturbation, this paper presents a novel stochastic gradient descent (SGD) algorithm, which can provably train any single-hidden-layer ReLU network to attain global optimality, despite the presence of infinitely many bad local minima, maxima, and saddle points in general. This result is the first of its kind, requiring no assumptions on the data distribution, training/network size, or initialization. Convergence of the resultant iterative algorithm to a global minimum is analyzed by establishing both an upper bound and a lower bound on the number of non-zero updates to be performed. Moreover, generalization guarantees are developed for ReLU networks trained with the novel SGD leveraging classic compression bounds. These guarantees highlight a key difference (at least in the worst case) between reliably learning a ReLU network as well as a leaky ReLU network in terms of sample complexity. Numerical tests using both synthetic data and real images validate the effectiveness of the algorithm and the practical merits of the theory.",
keywords = "Deep learning, escaping local minima, generalization, global optimality, stochastic gradient descent",
author = "Gang Wang and Giannakis, {Georgios B} and Jie Chen",
year = "2019",
month = "5",
day = "1",
doi = "10.1109/TSP.2019.2904921",
language = "English (US)",
volume = "67",
pages = "2357--2370",
journal = "IEEE Transactions on Signal Processing",
issn = "1053-587X",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "9",

}

TY - JOUR

T1 - Learning ReLU Networks on Linearly Separable Data

T2 - Algorithm, Optimality, and Generalization

AU - Wang, Gang

AU - Giannakis, Georgios B

AU - Chen, Jie

PY - 2019/5/1

Y1 - 2019/5/1

N2 - Neural networks with rectified linear unit (ReLU) activation functions (a.k.a. ReLU networks) have achieved great empirical success in various domains. Nonetheless, existing results for learning ReLU networks either pose assumptions on the underlying data distribution being, e.g., Gaussian, or require the network size and/or training size to be sufficiently large. In this context, the problem of learning a two-layer ReLU network is approached in a binary classification setting, where the data are linearly separable and a hinge loss criterion is adopted. Leveraging the power of random noise perturbation, this paper presents a novel stochastic gradient descent (SGD) algorithm, which can provably train any single-hidden-layer ReLU network to attain global optimality, despite the presence of infinitely many bad local minima, maxima, and saddle points in general. This result is the first of its kind, requiring no assumptions on the data distribution, training/network size, or initialization. Convergence of the resultant iterative algorithm to a global minimum is analyzed by establishing both an upper bound and a lower bound on the number of non-zero updates to be performed. Moreover, generalization guarantees are developed for ReLU networks trained with the novel SGD leveraging classic compression bounds. These guarantees highlight a key difference (at least in the worst case) between reliably learning a ReLU network as well as a leaky ReLU network in terms of sample complexity. Numerical tests using both synthetic data and real images validate the effectiveness of the algorithm and the practical merits of the theory.

AB - Neural networks with rectified linear unit (ReLU) activation functions (a.k.a. ReLU networks) have achieved great empirical success in various domains. Nonetheless, existing results for learning ReLU networks either pose assumptions on the underlying data distribution being, e.g., Gaussian, or require the network size and/or training size to be sufficiently large. In this context, the problem of learning a two-layer ReLU network is approached in a binary classification setting, where the data are linearly separable and a hinge loss criterion is adopted. Leveraging the power of random noise perturbation, this paper presents a novel stochastic gradient descent (SGD) algorithm, which can provably train any single-hidden-layer ReLU network to attain global optimality, despite the presence of infinitely many bad local minima, maxima, and saddle points in general. This result is the first of its kind, requiring no assumptions on the data distribution, training/network size, or initialization. Convergence of the resultant iterative algorithm to a global minimum is analyzed by establishing both an upper bound and a lower bound on the number of non-zero updates to be performed. Moreover, generalization guarantees are developed for ReLU networks trained with the novel SGD leveraging classic compression bounds. These guarantees highlight a key difference (at least in the worst case) between reliably learning a ReLU network as well as a leaky ReLU network in terms of sample complexity. Numerical tests using both synthetic data and real images validate the effectiveness of the algorithm and the practical merits of the theory.

KW - Deep learning

KW - escaping local minima

KW - generalization

KW - global optimality

KW - stochastic gradient descent

UR - http://www.scopus.com/inward/record.url?scp=85064137028&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85064137028&partnerID=8YFLogxK

U2 - 10.1109/TSP.2019.2904921

DO - 10.1109/TSP.2019.2904921

M3 - Article

AN - SCOPUS:85064137028

VL - 67

SP - 2357

EP - 2370

JO - IEEE Transactions on Signal Processing

JF - IEEE Transactions on Signal Processing

SN - 1053-587X

IS - 9

M1 - 8671751

ER -