Abstract
This work examines the problem of exact data interpolation via sparse (in neuron count), infinitely wide, single hidden layer neural networks with leaky rectified linear unit (ReLU) activations. Using the atomic norm framework of [Chandrasekaran et al. 2012], we derive simple characterizations of the convex hulls of the corresponding atomic sets under several different constraints on the weights and biases of the network, thus obtaining equivalent convex formulations for these problems. A modest extension of the proposed framework to a binary classification problem is also presented. We explore the efficacy of the resulting formulations experimentally and compare them with networks trained via gradient descent.
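
For orientation, the following is a minimal sketch of the atomic norm recipe of [Chandrasekaran et al. 2012] as it would apply to this interpolation setting; the notation is illustrative and not taken from the letter itself:

$$
\|z\|_{\mathcal{A}} \;=\; \inf\{\, t > 0 \;:\; z \in t\,\mathrm{conv}(\mathcal{A}) \,\}, \qquad
\hat{z} \;=\; \arg\min_{z} \;\|z\|_{\mathcal{A}} \;\; \text{subject to} \;\; z_i = y_i,\; i = 1,\dots,n,
$$

where each atom in $\mathcal{A}$ could be the vector of responses of a single leaky ReLU neuron on the training inputs, e.g. $a(w,b) = \big(\sigma(w^\top x_1 + b), \dots, \sigma(w^\top x_n + b)\big)$ with $\sigma(u) = \max(u, \alpha u)$ for a leak parameter $\alpha \in (0,1)$. The letter's contribution is characterizing $\mathrm{conv}(\mathcal{A})$ under particular constraints on $(w, b)$, which yields the equivalent convex formulations mentioned above.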
Original language | English (US)
---|---
Article number | 9264658
Pages (from-to) | 2114-2118
Number of pages | 5
Journal | IEEE Signal Processing Letters
Volume | 27
DOIs |
State | Published - 2020
Bibliographical note
Funding Information: Manuscript received July 14, 2020; revised October 30, 2020; accepted November 1, 2020. Date of publication November 19, 2020; date of current version December 18, 2020. The work of Akshay Kumar was supported in part by the 3M Science and Technology Doctoral Fellowship. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Ananda S. Chowdhury. (Corresponding author: Jarvis Haupt.) The authors are with the Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455 USA (e-mail: kumar511@umn.edu; jdhaupt@umn.edu).
Publisher Copyright:
© 1994-2012 IEEE.
Keywords
- Atomic norm
- binary classification
- convex optimization
- interpolation
- single hidden layer neural networks