Gaussian perceptron: Learning algorithms

Taek M Kwon, Michael E. Zervakis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


The structure of most neural networks is based on node models with sigmoid-type activation functions. This paper presents a neural network structure whose nodes use a Gaussian-type activation function. Three learning algorithms are introduced and compared: the Gaussian perceptron learning (GPL) algorithm, which is based on the conventional perceptron convergence procedure; the least-squares error (LSE) algorithm, which follows the classical steepest-descent approach; and the least-log-squares error (LLSE) algorithm, which is a gradient method on a log objective function. In particular, the convergence of the GPL algorithm is proved. The performance of each algorithm is demonstrated through benchmark problems.
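To make the abstract's description concrete, the following is a minimal sketch of a single Gaussian-type node trained by steepest descent on a squared-error objective, in the spirit of the LSE algorithm. The specific node form `exp(-(w·x)^2)` and all hyperparameters are assumptions for illustration; the paper's exact node model and update rules may differ.

```python
import numpy as np

def gaussian_node(w, x):
    # Gaussian-type activation on the weighted sum (one plausible form;
    # the paper's exact node model may differ).
    u = np.dot(w, x)
    return np.exp(-u**2)

def lse_train(X, y, lr=0.1, epochs=200, seed=0):
    """Steepest descent on the squared-error objective
    E = 0.5 * sum((t - f(w.x))^2), analogous to the LSE algorithm
    described in the abstract (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.5, size=X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            u = np.dot(w, x)
            f = np.exp(-u**2)
            # Chain rule: dE/dw = -(t - f) * df/du * x,
            # with df/du = -2u * exp(-u^2).
            grad = -(t - f) * (-2.0 * u * f) * x
            w -= lr * grad
    return w
```

A node of this form responds strongly when the weighted sum is near zero and weakly far from it, so it can learn target patterns that sigmoid nodes represent less naturally.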

Original language: English (US)
Title of host publication: 1992 IEEE International Conference on Systems, Man, and Cybernetics
Subtitle of host publication: Emergent Innovations in Information Transfer Processing and Decision Making, SMC 1992
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Electronic): 0780307208, 9780780307209
State: Published - Jan 1 1992
Event: IEEE International Conference on Systems, Man, and Cybernetics, SMC 1992 - Chicago, United States
Duration: Oct 18 1992 - Oct 21 1992

Publication series

Name: Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
ISSN (Print): 1062-922X


Other: IEEE International Conference on Systems, Man, and Cybernetics, SMC 1992
Country: United States

