Abstract
Neural networks form a general-purpose architecture for machine learning and parameter identification. The simplest neural network consists of a single hidden layer connected to a linear output layer. It is often assumed that the components of the hidden layer correspond to linearly independent functions, but proofs of this are only known for a few specialized classes of activation functions. This paper shows that for a wide class of activation functions, including most of those commonly used in neural network libraries, almost all choices of hidden-layer parameters lead to linearly independent functions. These linear independence properties are then used to derive sufficient conditions for persistence of excitation, a condition commonly used to ensure parameter convergence in adaptive control.
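The following is a minimal numerical sketch, not taken from the paper, illustrating the abstract's claim: for randomly drawn hidden-layer parameters, the hidden-unit functions x ↦ σ(wᵢx + bᵢ) are almost surely linearly independent. The choice of tanh activation, the network width, and the sampling grid are all assumptions made for illustration. Linear independence is checked by evaluating the units on sample points and computing the rank of the feature matrix; a Gram-matrix eigenvalue check gives an empirical analogue of the persistence-of-excitation condition.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 10                      # width of the single hidden layer (assumed)
w = rng.standard_normal(n_hidden)  # random input weights
b = rng.standard_normal(n_hidden)  # random biases

# Evaluate each hidden unit on many sample points (scalar input for simplicity).
x = np.linspace(-3.0, 3.0, 500)
Phi = np.tanh(np.outer(x, w) + b)  # shape (500, n_hidden); column i is unit i

# Linear independence of the sampled functions <=> full column rank of Phi.
rank = np.linalg.matrix_rank(Phi)
print(f"feature matrix rank: {rank} (full rank is {n_hidden})")

# Persistence-of-excitation style check: the empirical Gram matrix
# (a discrete analogue of the integral of phi(t) phi(t)^T over a window)
# should be positive definite, i.e. its smallest eigenvalue bounded away from 0.
gram = Phi.T @ Phi / len(x)
print(f"smallest Gram eigenvalue: {np.linalg.eigvalsh(gram)[0]:.3e}")
```

With random parameters, the printed rank equals the network width and the smallest Gram eigenvalue is strictly positive, consistent with the genericity result the abstract describes.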
Original language | English (US) |
---|---|
Title of host publication | 2022 IEEE 61st Conference on Decision and Control, CDC 2022 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 3365-3370 |
Number of pages | 6 |
ISBN (Electronic) | 9781665467612 |
DOIs | |
State | Published - 2022 |
Externally published | Yes |
Event | 61st IEEE Conference on Decision and Control, CDC 2022 - Cancun, Mexico
Duration | Dec 6 2022 → Dec 9 2022
Publication series
Name | Proceedings of the IEEE Conference on Decision and Control |
---|---|
Volume | 2022-December |
ISSN (Print) | 0743-1546 |
ISSN (Electronic) | 2576-2370 |
Conference
Conference | 61st IEEE Conference on Decision and Control, CDC 2022 |
---|---|
Country/Territory | Mexico |
City | Cancun |
Period | 12/6/22 → 12/9/22 |
Bibliographical note
Funding Information: This work was supported in part by NSF CMMI-2122856. A. Lamperski is with the Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA.
Publisher Copyright:
© 2022 IEEE.