Gaussian perceptron: Learning algorithms

Taek M Kwon, Michael E. Zervakis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The structure of most neural networks is based on node models with sigmoid-type activation functions. This paper presents a neural network structure whose nodes use a Gaussian-type activation function. Three learning algorithms are introduced and compared: the Gaussian perceptron learning (GPL) algorithm, which is based on the conventional perceptron convergence procedure; the least-squares error (LSE) algorithm, which follows the classical steepest-descent approach; and the least-log-squares error (LLSE) algorithm, which is a gradient method on a log objective function. In particular, the convergence of the GPL algorithm is proved. The performance of each algorithm is demonstrated on benchmark problems.
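To make the setting concrete, the following is a minimal sketch of a single Gaussian-type node trained by steepest descent on a squared-error objective, in the spirit of the LSE algorithm described above. The exact node model and update rules are not given in the abstract, so the activation form `exp(-(w·x)^2)` and the learning rate are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def gaussian_node(w, x):
    """Hypothetical Gaussian-type activation: the node responds most
    strongly when the input lies near the hyperplane w.x = 0."""
    u = np.dot(w, x)
    return np.exp(-u ** 2)

def lse_step(w, x, target, lr=0.1):
    """One steepest-descent update on the squared error E = (target - y)^2,
    as in a least-squares-error (LSE) style scheme."""
    u = np.dot(w, x)
    y = np.exp(-u ** 2)
    # dE/dw = -2 (target - y) * dy/du * x, with dy/du = -2 u y
    grad = -2.0 * (target - y) * (-2.0 * u * y) * x
    return w - lr * grad

# Toy usage: drive the node's output toward 1 for a fixed input,
# which pulls w toward the region where w.x = 0.
w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
for _ in range(200):
    w = lse_step(w, x, target=1.0)
y = gaussian_node(w, x)
```

The LLSE variant mentioned in the abstract would instead descend the gradient of a logarithmic objective; only the error function (and hence `grad`) changes, not the structure of the update.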

Original language: English (US)
Title of host publication: 1992 IEEE International Conference on Systems, Man, and Cybernetics
Subtitle of host publication: Emergent Innovations in Information Transfer Processing and Decision Making, SMC 1992
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 105-110
Number of pages: 6
ISBN (Electronic): 0780307208, 9780780307209
DOIs
State: Published - Jan 1 1992
Event: IEEE International Conference on Systems, Man, and Cybernetics, SMC 1992 - Chicago, United States
Duration: Oct 18, 1992 - Oct 21, 1992

Publication series

Name: Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
Volume: 1992-January
ISSN (Print): 1062-922X

Other

Other: IEEE International Conference on Systems, Man, and Cybernetics, SMC 1992
Country: United States
City: Chicago
Period: 10/18/92 - 10/21/92

