Rprop, short for resilient backpropagation, is a learning heuristic for supervised learning in feedforward artificial neural networks. It is a first-order optimization algorithm, created by Martin Riedmiller and Heinrich Braun in 1992. Like the Manhattan update rule, Rprop takes into account only the sign of the partial derivative over all patterns (not its magnitude), and acts independently on each weight. For each weight, if the partial derivative of the total error function changed sign compared to the last iteration, the update value for that weight is multiplied by a factor η−, where 0 < η− < 1. If the sign stayed the same, the update value is multiplied by a factor η+, where η+ > 1. The update values are calculated for each weight in this manner, and each weight is then changed by its own update value, in the direction opposite to the sign of its partial derivative, so as to minimise the total error function. Empirically, η+ is set to 1.2 and η− to 0.5. Rprop is a batch update algorithm: the partial derivatives are accumulated over the whole training set before a step is taken. Alongside the cascade correlation algorithm and the Levenberg–Marquardt algorithm, Rprop is one of the fastest weight update mechanisms.
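
As an illustration of the sign-based update described above, the following is a minimal NumPy sketch of one batch Rprop step, roughly in the spirit of the RPROP− variant (no weight backtracking). The function name, the array layout, and the step-size bounds step_min and step_max are illustrative assumptions, not part of the description above; only the sign comparison and the η+/η− scaling follow it.

    import numpy as np

    def rprop_step(weights, grad, prev_grad, step,
                   eta_plus=1.2, eta_minus=0.5,
                   step_min=1e-6, step_max=50.0):
        """One batch Rprop update (sketch, no backtracking).

        weights, grad, prev_grad and step are NumPy arrays of the same
        shape; grad is the partial derivative of the total error over all
        patterns, and only its sign is used.
        """
        same_sign = grad * prev_grad > 0   # sign unchanged since last iteration
        flipped   = grad * prev_grad < 0   # sign changed since last iteration

        # Grow the per-weight update value by eta_plus, shrink it by eta_minus
        # (the bounds step_min/step_max are an assumed safeguard).
        step = np.where(same_sign, np.minimum(step * eta_plus, step_max), step)
        step = np.where(flipped,   np.maximum(step * eta_minus, step_min), step)

        # Move each weight against the sign of its own partial derivative.
        weights = weights - np.sign(grad) * step
        return weights, step, grad         # returned grad is the next iteration's prev_grad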


Variations

Martin Riedmiller developed three algorithms, all named RPROP. Igel and Hüsken assigned names to them and added a new variant (Igel and Hüsken 2000, 2003):

1. RPROP+ is defined in "A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm".
2. RPROP− is defined in "Advanced Supervised Learning in Multi-layer Perceptrons – From Backpropagation to Adaptive Learning Algorithms". It is RPROP+ with the backtracking step removed.
3. iRPROP− is defined in "Rprop – Description and Implementation Details" (Riedmiller 1994) and was reinvented by Igel and Hüsken. This variant is very popular and the simplest; a sketch of its update is given after this list.
4. iRPROP+ is defined in "Improving the Rprop Learning Algorithm" and is very robust and typically faster than the other three variants.
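
To make the difference concrete, here is a minimal sketch of the iRPROP− modification, under the same illustrative assumptions (function and variable names, step bounds) as the sketch above: when a partial derivative changes sign, that entry is set to zero, so the corresponding weight is simply not moved this iteration rather than backtracked.

    import numpy as np

    def irprop_minus_step(weights, grad, prev_grad, step,
                          eta_plus=1.2, eta_minus=0.5,
                          step_min=1e-6, step_max=50.0):
        """One iRPROP- update (sketch): like plain RPROP- except that a
        sign change zeroes the gradient entry, so that weight is skipped
        this iteration and the next sign comparison starts fresh."""
        same_sign = grad * prev_grad > 0
        flipped   = grad * prev_grad < 0

        step = np.where(same_sign, np.minimum(step * eta_plus, step_max), step)
        step = np.where(flipped,   np.maximum(step * eta_minus, step_min), step)

        grad = np.where(flipped, 0.0, grad)   # the iRPROP- modification
        weights = weights - np.sign(grad) * step
        return weights, step, grad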


References

Christian Igel and Michael Hüsken. "Improving the Rprop Learning Algorithm". Second International Symposium on Neural Computation (NC 2000), pp. 115–121, ICSC Academic Press, 2000.
Christian Igel and Michael Hüsken. "Empirical Evaluation of the Improved Rprop Learning Algorithm". Neurocomputing 50:105–123, 2003.
Martin Riedmiller. "Rprop – Description and Implementation Details". Technical report, 1994.
Martin Riedmiller and Heinrich Braun. "A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm".
Martin Riedmiller. "Advanced Supervised Learning in Multi-layer Perceptrons – From Backpropagation to Adaptive Learning Algorithms".


External links


Rprop Optimization Toolbox
Rprop training for Neural Networks in MATLAB
Categories: Artificial neural networks, Machine learning algorithms