Publication | Closed Access
Selecting the architecture of a class of back-propagation neural networks used as approximators
Year: 1997 · Citations: 22 · References: 22
Topics: Artificial Intelligence, Recurrent Neural Network, Engineering, Machine Learning, Evolving Neural Network, Cellular Neural Network, Computational Neuroscience, Sparse Neural Network, Neural Network, Computer Engineering, Computer Science, Good Approximations, Back-propagation Neural Networks, Brain-like Computing, Neural Architecture Search, Approximation Theory, Signal Processing, Poor Approximations
Abstract

This paper examines the architecture of back-propagation neural networks used as approximators by addressing the interrelationship between the number of training pairs and the number of input, output, and hidden layer nodes required for a good approximation. It concentrates on nets with an input layer, one hidden layer, and one output layer. It shows that many of the currently proposed schemes for selecting network architecture for such nets are deficient. It demonstrates in numerous examples that overdetermined neural networks tend to give good approximations over a region of interest, while underdetermined networks give approximations which can satisfy the training pairs but may give poor approximations over that region of interest. A scheme is presented that adjusts the number of hidden layer nodes in a neural network so as to give an overdetermined approximation. The advantages and disadvantages of using multiple output nodes are discussed. Guidelines for selecting the number of output nodes are presented.
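The overdetermined-versus-underdetermined distinction the abstract draws can be made concrete by comparing the number of free weights in a single-hidden-layer net with the number of scalar training equations (training pairs times output nodes). The helper functions below are a minimal sketch of that bookkeeping, not the paper's actual scheme; the function names and the inclusion of bias terms are assumptions.

```python
def weight_count(n_in, n_hidden, n_out):
    """Free parameters (weights plus biases) in a one-hidden-layer net.

    Assumes a fully connected net with biases on the hidden and
    output layers, as is standard for back-propagation networks.
    """
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)


def is_overdetermined(n_in, n_hidden, n_out, n_pairs):
    """True when the scalar training equations (pairs x outputs)
    outnumber the free parameters, i.e. the fit is overdetermined."""
    return n_pairs * n_out >= weight_count(n_in, n_hidden, n_out)


def max_hidden_nodes(n_in, n_out, n_pairs):
    """Largest hidden-layer size that keeps the net overdetermined.

    Derived by solving n_pairs*n_out >= h*(n_in + n_out + 1) + n_out
    for the hidden-layer size h.
    """
    return (n_pairs * n_out - n_out) // (n_in + n_out + 1)
```

For example, a 2-input, 1-output net with 10 hidden nodes has 41 parameters, so 100 training pairs leave it overdetermined, while 40 hidden nodes (161 parameters) would make it underdetermined; `max_hidden_nodes(2, 1, 100)` reports the cap as 24.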