Publication | Closed Access
Learning algorithms for oscillatory networks with gap junctions and membrane currents
Year: 1991 · Citations: 27 · References: 17
Keywords: Engineering, Machine Learning, Learning Algorithm, Network Analysis, Network Model, Membrane Currents, Recurrent Neural Network, Network Dynamics, Physics-Aware Machine Learning, Physics, Computer Engineering, Computer Science, Deep Learning, Neural Architecture Search, Network Theory, Model Optimization, Network Science, Gap Junctions, Cellular Neural Network, Computational Neuroscience, Neuronal Network, High-dimensional Network, Oscillatory Networks
Abstract

One of the most important problems in studying neural network models is the adjustment of parameters. The authors show how to formulate the problem as the minimization of the difference between two limit cycles. The backpropagation method for learning algorithms is described as the application of gradient descent to an error function that computes this difference. A mathematical formulation is given that is applicable to any type of network model, and is applied to several models. For example, when learning in a network in which all cells have a common, adjustable bias current, the value of the bias is adjusted at a rate proportional to the difference between the sum of the target outputs and the sum of the actual outputs. When learning in a network of n cells where a target output is given for every cell, the learning algorithm splits into n independent learning algorithms, one per cell. For networks containing gap junctions, a gap junction is modelled as a conductance times the potential difference between the two adjacent cells. The requirement that a conductance g must be positive is enforced by replacing g with a function pos(g*) whose value is always positive, for example exp(0.1g*), and deriving an algorithm that adjusts the parameter g* in place of g. When a target output is specified for every cell in a network with gap junctions, the learning algorithm splits into fewer independent components, one for each gap-connected subset of the network. The learning algorithm for a gap-connected set of cells cannot be parallelized further. As a final example, a learning algorithm is derived for a mutually inhibitory two-cell network in which each cell has a membrane current. This generalized approach to backpropagation allows one to derive a learning algorithm for almost any model neural network given in terms of differential equations. It will be an essential tool for adjusting parameters in small but complex network models.
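The abstract's positivity trick can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: it assumes the reparameterization pos(g*) = exp(0.1·g*) named in the abstract, models a gap junction as conductance times the potential difference between two adjacent cells, and applies a chain-rule gradient step to g* instead of g. The function names (`pos`, `gap_current`, `update_gstar`) and the learning rate are hypothetical choices for illustration.

```python
import math

def pos(g_star):
    # Always-positive reparameterization from the abstract: exp(0.1 * g*).
    # Whatever value g* takes, the effective conductance g = pos(g*) > 0.
    return math.exp(0.1 * g_star)

def dpos_dgstar(g_star):
    # Derivative of pos with respect to g*, needed for the chain rule.
    return 0.1 * math.exp(0.1 * g_star)

def gap_current(g_star, v_i, v_j):
    # Gap junction modelled as conductance times the potential
    # difference between the two adjacent cells i and j.
    return pos(g_star) * (v_j - v_i)

def update_gstar(g_star, dE_dg, lr=0.01):
    # Gradient descent on g* rather than g:
    # dE/dg* = dE/dg * dpos/dg*, so positivity of g is never violated.
    return g_star - lr * dE_dg * dpos_dgstar(g_star)
```

Because the update moves g* through an unconstrained real line while g = pos(g*) stays strictly positive, no projection or clipping step is needed during learning.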