Publication | Closed Access

RPROP - a fast adaptive learning algorithm

Citations: 271
References: 0
Year: 1993

Abstract

In this paper, a new learning algorithm, RPROP, is proposed. To overcome the inherent disadvantages of the pure gradient-descent technique of the original backpropagation procedure, RPROP adapts the weight update-values according to the behaviour of the error function. Results of RPROP on several learning tasks are shown in comparison with other well-known adaptive learning algorithms.

1 Introduction

Backpropagation is the most widely used algorithm for supervised learning with multilayered feed-forward networks. The basic idea of the backpropagation learning algorithm is the repeated application of the chain rule to compute the influence of each weight in the network on an arbitrary error function E [1]:

\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial a_i} \, \frac{\partial a_i}{\partial net_i} \, \frac{\partial net_i}{\partial w_{ij}}    (1)

where w_{ij} is the weight from neuron j to neuron i, a_i is the activation value, and net_i is the weighted sum of the inputs of neuron i. Once the partial derivative for each weight is known, the a...
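The chain rule of equation (1) can be checked numerically for a single sigmoid unit. This is an illustrative sketch, not code from the paper; the squared-error loss and the single-weight setup are assumptions chosen for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Single unit: net_i = w_ij * a_j, a_i = sigmoid(net_i), E = 0.5*(a_i - t)^2
w, a_j, t = 0.5, 1.0, 0.0
net = w * a_j
a_i = sigmoid(net)

# The three factors of equation (1):
dE_da = a_i - t                 # dE/da_i for squared error
da_dnet = a_i * (1.0 - a_i)     # derivative of the sigmoid
dnet_dw = a_j                   # net_i is linear in w_ij
grad_chain = dE_da * da_dnet * dnet_dw

# Central finite-difference check of the same derivative
eps = 1e-6
E = lambda w_: 0.5 * (sigmoid(w_ * a_j) - t) ** 2
grad_fd = (E(w + eps) - E(w - eps)) / (2 * eps)

assert abs(grad_chain - grad_fd) < 1e-8
```

The assertion confirms that multiplying the three partial derivatives reproduces the true gradient of E with respect to the weight.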
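The adaptation of per-weight update-values that the abstract describes can be sketched as follows. This is a minimal illustration of the sign-based rule RPROP is known for, not the paper's own code; the growth/shrink factors (1.2 and 0.5) and the step bounds are the commonly used defaults, assumed here rather than taken from this abstract.

```python
import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One RPROP adaptation step (sketch, without backtracking).

    Each weight keeps its own update-value `step`. If the gradient kept
    its sign since the last step, the update-value grows; if the sign
    flipped (a minimum was overstepped), it shrinks. Only the *sign* of
    the current gradient determines the direction of the weight change,
    not its magnitude.
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    delta_w = -np.sign(grad) * step
    return delta_w, step

# Usage: the second weight's gradient changed sign, so its step shrinks
grad, prev = np.array([0.3, -0.1]), np.array([0.2, 0.4])
dw, new_step = rprop_step(grad, prev, step=np.array([0.1, 0.1]))
```

Decoupling the step size from the gradient magnitude is what lets RPROP sidestep the vanishing/blow-up behaviour of plain gradient descent that the abstract refers to.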