Publication | Closed Access
Accelerated learning in layered neural networks
211
Citations
0
References
1988
Year
Model Optimization, Engineering, Machine Learning, Computational Learning Theory, Sparse Neural Network, Large Scale Optimization, Layered Neural Networks, Computer Science, Deep Learning, Neural Architecture Search, Error Function
Abstract. Learning in layered neural networks is posed as the minimization of an error function defined over the training set. A probabilistic interpretation of the target activities suggests the use of relative entropy as an error measure. We investigate the merits of using this error function over the traditional quadratic function for gradient descent learning. Comparative numerical simulations for the contiguity problem show marked reductions in learning times. This improvement is explained in terms of the characteristic steepness of the landscape defined by the error function in configuration space.
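The abstract's steepness argument can be illustrated with a minimal sketch (an assumption-laden illustration, not code from the paper): for a single sigmoid output unit, the gradient of the relative-entropy error with respect to the pre-activation reduces to `y - t`, whereas the quadratic error's gradient carries an extra sigmoid-derivative factor `y * (1 - y)` that vanishes when the unit saturates, flattening the error landscape.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def quadratic_grad(a, t):
    # dE/da for E = (y - t)^2 / 2 with y = sigmoid(a):
    # (y - t) * y * (1 - y); the y*(1-y) factor vanishes at saturation.
    y = sigmoid(a)
    return (y - t) * y * (1.0 - y)

def entropy_grad(a, t):
    # dE/da for the relative-entropy error
    # E = t*log(t/y) + (1-t)*log((1-t)/(1-y)) with y = sigmoid(a):
    # the gradient simplifies to (y - t), with no vanishing factor.
    return sigmoid(a) - t

# A saturated unit answering wrongly: target 1, strongly negative pre-activation.
a, t = -5.0, 1.0
print(quadratic_grad(a, t))  # tiny: the quadratic landscape is nearly flat here
print(entropy_grad(a, t))    # close to -1: a steep descent direction remains
```

Under this toy comparison, the relative-entropy gradient stays large exactly where the quadratic gradient decays, which is consistent with the reported reduction in learning times.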