Publication | Closed Access
A Convergence Theorem for Sequential Learning in Two-Layer Perceptrons
Year: 1990 · Citations: 136 · References: 5
Keywords: Artificial Intelligence, Incremental Learning, Engineering, Machine Learning, Data Science, Convergence Theorem, Pattern Recognition, Computational Learning Theory, Hidden Units, Knowledge Discovery, Sequential Learning, Convergence Analysis, Computer Science, Brain-like Computing, Deep Learning, Recurrent Neural Network
We consider a perceptron with Ni input units, one output unit, and an as-yet-unspecified number of hidden units. This perceptron must be able to learn a given but arbitrary set of input-output examples. By sequential learning we mean that groups of patterns belonging to the same class are sequentially separated from the rest by successively adding hidden units, until all remaining patterns fall in a single class. We prove that the internal representations obtained by such procedures are linearly separable. Preliminary numerical tests of an algorithm implementing these ideas are presented and compare favourably with the results of other growth algorithms.
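The sequential idea can be made concrete on a tiny example. The sketch below is illustrative, not the authors' exact procedure: for the XOR problem, two hand-picked hidden units each "cut off" a group of patterns of the same class (first (1,1), then (0,0)), after which the remaining patterns all share one class. The convergence theorem says the resulting internal representations are linearly separable, which we check by running ordinary perceptron training on the hidden-layer outputs.

```python
def step(z):
    # Threshold unit: fires (1) when the weighted sum is non-negative.
    return 1 if z >= 0 else 0

# XOR patterns: (x1, x2) -> class label (not linearly separable in input space).
patterns = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Hidden units added "sequentially" (hand-picked for this sketch):
# unit 1 isolates (1,1) (class 0); unit 2 isolates (0,0) (class 0);
# the remaining patterns (0,1) and (1,0) are all class 1, so we stop.
hidden = [
    lambda x: step(x[0] + x[1] - 1.5),   # fires only on (1,1)
    lambda x: step(-x[0] - x[1] + 0.5),  # fires only on (0,0)
]

def internal(x):
    # Internal representation: vector of hidden-unit outputs.
    return tuple(h(x) for h in hidden)

# Train a single perceptron on the internal representations.
w, b = [0.0, 0.0], 0.0
for _ in range(20):
    for x, t in patterns.items():
        r = internal(x)
        y = step(sum(wi * ri for wi, ri in zip(w, r)) + b)
        for i in range(len(w)):
            w[i] += (t - y) * r[i]
        b += t - y

# The theorem predicts this training succeeds on every pattern.
assert all(
    step(sum(wi * ri for wi, ri in zip(w, internal(x))) + b) == t
    for x, t in patterns.items()
)
print("internal representations are linearly separable; w =", w, "b =", b)
```

Here the hidden weights are hard-coded for clarity; a growth algorithm would instead search for each cutting hyperplane automatically, adding one unit per separated group.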