Abstract
The Multi-Layer Perceptron (MLP) network has been successfully applied to many practical problems because of its non-linear mapping ability. However, many factors may affect the generalization ability of MLP networks, such as the number of hidden units, the initial values of the weights, and the stopping rules. If improperly chosen, these factors may result in poor generalization. It is therefore important to identify these factors and their interactions in order to effectively control the generalization ability of MLP networks. In this paper, we empirically identify the factors that affect the generalization ability of MLP networks and compare their relative effects on generalization performance.
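The kind of comparison the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's actual experiment: a minimal one-hidden-layer MLP in NumPy, trained on a toy XOR task while varying two of the factors named above (the number of hidden units and the random seed that sets the initial weights), then comparing training accuracy across configurations. All names (`train_mlp`, `predict`) and hyperparameters are assumptions for the sketch.

```python
import numpy as np

def train_mlp(X, y, n_hidden, seed, epochs=2000, lr=0.5):
    """Train a 1-hidden-layer MLP (tanh hidden, sigmoid output) by gradient descent."""
    rng = np.random.default_rng(seed)                 # initial weights depend on the seed
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                      # hidden activations
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
        err = out - y                                 # grad of cross-entropy w.r.t. output logit
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)            # backprop through tanh
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2))) > 0.5).astype(float)

# Toy data: XOR, which no linear model (zero hidden units) can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

for n_hidden in (2, 4, 8):
    accs = [(predict(train_mlp(X, y, n_hidden, seed), X) == y).mean()
            for seed in range(3)]
    print(f"hidden units = {n_hidden}: accuracies over seeds = {accs}")
```

Running this shows accuracy varying with both the hidden-layer width and the weight-initialization seed; the paper's study of stopping rules would correspond to additionally varying `epochs` against a held-out validation set.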
Original language | English (US) |
---|---|
Pages | 625-630 |
Number of pages | 6 |
State | Published - Dec 1 1999 |
Event | International Joint Conference on Neural Networks (IJCNN'99) - Washington, DC, USA; Duration: Jul 10 1999 → Jul 16 1999 |
Other
Other | International Joint Conference on Neural Networks (IJCNN'99) |
---|---|
City | Washington, DC, USA |
Period | 7/10/99 → 7/16/99 |