Publication | Closed Access
Deterministic Convergence of an Online Gradient Method for BP Neural Networks
157 Citations | 21 References | Year: 2005
Mathematical Programming · Model Optimization · Engineering · Machine Learning · Convergence Theorem · Stochastic Optimization · Computational Learning Theory · Online Gradient Method · Online Algorithm · Large Scale Optimization · Computer Science · BP Neural Networks · Deterministic Convergence · Convergence Analysis · Adaptive Optimization
Online gradient methods are widely used for training feedforward neural networks. In this paper we prove a convergence theorem for an online gradient method with variable step size for back-propagation (BP) neural networks with a hidden layer. Unlike most existing convergence results, which are probabilistic and nonmonotone in nature, the convergence result established here is deterministic and monotone.
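The setting described in the abstract — updating the weights of a one-hidden-layer network one training sample at a time, with a step size that varies (here, diminishes) across training cycles — can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact scheme: the network size, step-size schedule η_m = η₀/(1+m), and toy regression task are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumed for illustration): learn y = sin(pi*x) on [-1, 1]
X = rng.uniform(-1.0, 1.0, size=(200, 1))
Y = np.sin(np.pi * X)

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(1, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)                          # hidden biases
w2 = rng.normal(scale=0.5, size=n_hidden)        # hidden -> output weights

def forward(x):
    h = np.tanh(x @ W1 + b1)       # hidden-layer activations
    return h, h @ w2               # network output

def mse():
    _, out = forward(X)
    return float(np.mean((out - Y.ravel()) ** 2))

loss_before = mse()
eta0 = 0.1
for m in range(20):                        # training cycles
    eta = eta0 / (1 + m)                   # variable (diminishing) step size
    for i in rng.permutation(len(X)):      # online: one sample per update
        x = X[i:i + 1]
        h, out = forward(x)
        err = out[0] - Y[i, 0]
        # Gradient of the squared error for this single sample
        g_w2 = err * h.ravel()
        g_h = err * w2 * (1.0 - h.ravel() ** 2)   # backprop through tanh
        w2 -= eta * g_w2
        W1 -= eta * np.outer(x.ravel(), g_h)
        b1 -= eta * g_h
loss_after = mse()
```

With a bounded activation and a suitably diminishing step size, the total error decreases over training in this sketch; the paper's contribution is to prove that, under its assumptions, such a decrease is deterministic and monotone rather than holding only in probability.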