Refining and Redefining the Back-Propagation Learning Rule for Connectionist Networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

It is shown that there is a general expression for back-propagation rules, in contrast to previous studies that have implicitly assumed that there is just one back-propagation rule. An infinite number of possible learning rules obey the general expression. Several other rules are presented and, along with the original rule, applied to a small but difficult pattern-mapping task with a three-layer network. The performance criteria investigated include the dependence of the rate of learning on the learning-rate parameter η, the information required by the learning rule, and the discreteness of the hidden-unit representations learned. The original algorithm is not the best by any of these criteria. It is suggested that the problems observed with the original back-propagation learning rule, such as the deterioration of performance with very large networks, may not exist for some of the other rules.
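
The abstract does not state the general expression itself, so the following is only an illustrative sketch of how a family of back-propagation-style rules might share one update skeleton. Here the family is assumed to be parameterized by the derivative-like factor g used in the delta terms; the names train_step, g_original, and g_constant are hypothetical, and the constant-factor variant is an example choice, not one of the paper's rules. With g equal to the sigmoid derivative, the sketch reduces to the original back-propagation rule.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(W1, W2, x, t, eta, g):
    """One weight update for a three-layer (two-weight-layer) network.

    g(activation) supplies the derivative-like factor in the deltas;
    g = lambda a: a * (1 - a) reproduces the original back-propagation rule.
    """
    h = sigmoid(W1 @ x)                     # hidden activations
    y = sigmoid(W2 @ h)                     # output activations
    delta_out = (t - y) * g(y)              # output-layer delta
    delta_hid = (W2.T @ delta_out) * g(h)   # back-propagated hidden delta
    W2 += eta * np.outer(delta_out, h)      # learning-rate-scaled updates
    W1 += eta * np.outer(delta_hid, x)
    return W1, W2

# The original rule and one hypothetical variant with a constant factor:
g_original = lambda a: a * (1.0 - a)
g_constant = lambda a: np.ones_like(a)

# Example usage on a tiny 2-2-1 network:
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 2))
W2 = rng.normal(scale=0.5, size=(1, 2))
x, t = np.array([1.0, 0.0]), np.array([1.0])
W1, W2 = train_step(W1, W2, x, t, eta=0.5, g=g_original)
```

Under this assumed parameterization, changing g leaves the credit-assignment structure of back-propagation intact while altering, for example, how strongly saturated units are updated, which is the kind of degree of freedom a family of rules could differ on.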

Original language: English (US)
Title of host publication: Unknown Host Publication Title
Publisher: IEEE
Pages: 958-963
Number of pages: 6
State: Published - Dec 1 1987


Cite this

Samad, T. (1987). Refining and redefining the back-propagation learning rule for connectionist networks. In Unknown Host Publication Title (pp. 958-963). IEEE.