Natural evolution strategy

Natural evolution strategies (NES) are a family of numerical optimization algorithms for black box problems. Similar in spirit to evolution strategies, they iteratively update the (continuous) parameters of a ''search distribution'' by following the natural gradient towards higher expected fitness.


Method

The general procedure is as follows: the ''parameterized'' search distribution is used to produce a batch of search points, and the fitness function is evaluated at each such point. The distribution's parameters (which include ''strategy parameters'') allow the algorithm to adaptively capture the (local) structure of the fitness function. For example, in the case of a Gaussian distribution, this comprises the mean and the covariance matrix. From the samples, NES estimates a search gradient on the parameters towards higher expected fitness. NES then performs a gradient ascent step along the natural gradient, a second-order method which, unlike the plain gradient, renormalizes the update with respect to uncertainty. This step is crucial, since it prevents oscillations, premature convergence, and undesired effects stemming from a given parameterization. The entire process reiterates until a stopping criterion is met.

All members of the NES family operate based on the same principles. They differ in the type of probability distribution and the gradient approximation method used. Different search spaces require different search distributions; for example, in low dimensionality it can be highly beneficial to model the full covariance matrix, while in high dimensions a more scalable alternative is to limit the covariance to the diagonal only. In addition, highly multi-modal search spaces may benefit from more heavy-tailed distributions (such as the Cauchy distribution, as opposed to the Gaussian). A last distinction arises between distributions where the natural gradient can be computed analytically, and more general distributions where it must be estimated from samples.
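For concreteness, the sketch below (not part of the original article; the sphere fitness function, problem dimension, and batch size are illustrative assumptions) shows how a Gaussian search distribution, parameterized by a mean and a full covariance matrix, produces and evaluates one batch of search points:

import numpy as np

# Illustrative fitness function, to be maximized (sphere benchmark, optimum at x = 0).
def fitness(x):
    return -np.sum(x ** 2)

rng = np.random.default_rng(0)
dim, population_size = 5, 20        # assumed problem dimension and batch size

# Parameters of the Gaussian search distribution: mean and (full) covariance matrix.
mu = rng.normal(size=dim)
cov = np.eye(dim)

# One step of the general procedure: sample a batch of search points and evaluate them.
samples = rng.multivariate_normal(mu, cov, size=population_size)
fitnesses = np.array([fitness(x) for x in samples])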


Search gradients

Let \theta denote the parameters of the search distribution \pi(x \,|\, \theta) and f(x) the fitness function evaluated at x. NES then pursues the objective of maximizing the ''expected fitness under the search distribution''
:: J(\theta) = \operatorname{E}_\theta[f(x)] = \int f(x) \; \pi(x \,|\, \theta) \; dx
through gradient ascent. The gradient can be rewritten as
:: \nabla_\theta J(\theta) = \nabla_\theta \int f(x) \; \pi(x \,|\, \theta) \; dx
::: = \int f(x) \; \nabla_\theta \pi(x \,|\, \theta) \; dx
::: = \int f(x) \; \nabla_\theta \pi(x \,|\, \theta) \; \frac{\pi(x \,|\, \theta)}{\pi(x \,|\, \theta)} \; dx
::: = \int \Big[ f(x) \; \nabla_\theta \log\pi(x \,|\, \theta) \Big] \; \pi(x \,|\, \theta) \; dx
::: = \operatorname{E}_\theta \left[ f(x) \; \nabla_\theta \log\pi(x \,|\, \theta) \right]
that is, the expected value of f(x) times the log-derivatives at x. In practice, it is possible to use the Monte Carlo approximation based on a finite number of \lambda samples
:: \nabla_\theta J(\theta) \approx \frac{1}{\lambda} \sum_{k=1}^{\lambda} f(x_k) \; \nabla_\theta \log\pi(x_k \,|\, \theta).
Finally, the parameters of the search distribution can be updated iteratively
:: \theta \leftarrow \theta + \eta \nabla_\theta J(\theta)
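As a concrete illustration (not from the original article), consider an isotropic Gaussian search distribution with mean \mu and fixed standard deviation \sigma, whose log-derivative with respect to the mean is \nabla_\mu \log\pi(x \,|\, \mu) = (x - \mu)/\sigma^2. The Monte Carlo estimate and the plain-gradient update then take only a few lines; the sphere fitness and all numeric constants are assumptions:

import numpy as np

def fitness(x):                          # illustrative objective, maximized at x = 0
    return -np.sum(x ** 2)

rng = np.random.default_rng(0)
dim, lam, sigma, eta = 5, 50, 0.5, 0.05  # assumed dimension, sample count, step sizes

mu = rng.normal(size=dim)                # the search-distribution parameter theta = mu

for _ in range(200):
    x = mu + sigma * rng.standard_normal((lam, dim))   # draw lambda samples from pi(.|theta)
    f = np.array([fitness(xk) for xk in x])            # evaluate the fitness at each sample
    grad_log = (x - mu) / sigma ** 2                    # log-derivatives at the samples
    grad_J = (f[:, None] * grad_log).mean(axis=0)       # Monte Carlo search gradient
    mu = mu + eta * grad_J                              # plain (vanilla) gradient ascent step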


Natural gradient ascent

Instead of using the plain stochastic gradient for updates, NES follows the natural gradient, which has been shown to possess numerous advantages over the plain (''vanilla'') gradient, e.g.:
* the gradient direction is independent of the parameterization of the search distribution
* the update magnitudes are automatically adjusted based on uncertainty, in turn speeding up convergence on plateaus and ridges.

The NES update is therefore
:: \theta \leftarrow \theta + \eta \mathbf{F}^{-1} \nabla_\theta J(\theta) ,
where \mathbf{F} is the Fisher information matrix. The Fisher matrix can sometimes be computed exactly; otherwise it is estimated from samples, reusing the log-derivatives \nabla_\theta \log\pi(x \,|\, \theta).
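A minimal sketch of this step, reusing the log-derivatives from the Monte Carlo example above, estimates the Fisher matrix as the average outer product of the log-derivatives and solves for the natural-gradient direction. The helper name natural_gradient_step and the small ridge term added for numerical stability are assumptions, not part of the original description:

import numpy as np

def natural_gradient_step(theta, grad_log, grad_J, eta, ridge=1e-6):
    # grad_log: (lambda, d) array of log-derivatives  grad_theta log pi(x_k | theta)
    # grad_J:   (d,) Monte Carlo estimate of the search gradient
    lam = grad_log.shape[0]
    F = grad_log.T @ grad_log / lam          # sample-based Fisher estimate
    F += ridge * np.eye(F.shape[0])          # assumed ridge term, keeps F invertible
    # theta <- theta + eta * F^{-1} grad_J   (solve the linear system instead of inverting F)
    return theta + eta * np.linalg.solve(F, grad_J)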


Fitness shaping

NES utilizes rank-based fitness shaping in order to render the algorithm more robust, and ''invariant'' under monotonically increasing transformations of the fitness function. For this purpose, the fitness of the population is transformed into a set of utility values u_1 \geq \dots \geq u_\lambda. Let x_k denote the ''k''th best individual. Replacing fitness with utility, the gradient estimate becomes
:: \nabla_\theta J(\theta) = \sum_{k=1}^{\lambda} u_k \; \nabla_\theta \log\pi(x_k \,|\, \theta) .
The choice of utility function is a free parameter of the algorithm.
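One concrete shaping, offered here as an example of a common choice in the NES literature rather than the only valid option, gives logarithmically decaying, zero-mean weights to the better half of the population:

import numpy as np

def rank_utilities(fitnesses):
    # Rank-based utilities u_1 >= ... >= u_lambda, assigned to the samples by fitness rank.
    lam = len(fitnesses)
    ranks = np.empty(lam, dtype=int)
    ranks[np.argsort(fitnesses)[::-1]] = np.arange(1, lam + 1)   # rank 1 = best sample
    raw = np.maximum(0.0, np.log(lam / 2 + 1) - np.log(ranks))
    return raw / raw.sum() - 1.0 / lam                           # normalized, zero-mean weights

In the Monte Carlo gradient estimate above, the utilities simply replace the raw fitness values, e.g. grad_J = rank_utilities(f) @ grad_log.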


Pseudocode

repeat
   for k = 1 \ldots \lambda do ''// \lambda is the population size''
      draw sample x_k \sim \pi(\cdot \,|\, \theta)
      evaluate the fitness f(x_k)
      calculate log-derivatives \nabla_\theta \log\pi(x_k \,|\, \theta)
   end
   assign the utilities u_k ''// based on rank''
   estimate the gradient \nabla_\theta J \leftarrow \textstyle\sum_{k=1}^{\lambda} u_k \; \nabla_\theta \log\pi(x_k \,|\, \theta)
   estimate the Fisher matrix \mathbf{F} \leftarrow \textstyle\frac{1}{\lambda} \sum_{k=1}^{\lambda} \nabla_\theta \log\pi(x_k \,|\, \theta) \; \nabla_\theta \log\pi(x_k \,|\, \theta)^{\top} ''// or compute it exactly''
   update parameters \theta \leftarrow \theta + \eta \mathbf{F}^{-1} \nabla_\theta J ''// \eta is the learning rate''
until stopping criterion is met
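The pseudocode translates into the short, self-contained Python sketch below; the separable Gaussian search distribution (mean plus per-dimension log standard deviation), the sphere fitness, and all numeric constants are illustrative assumptions rather than a canonical NES implementation:

import numpy as np

def fitness(x):                                      # illustrative objective, maximized at x = 0
    return -np.sum(x ** 2)

def rank_utilities(f):                               # rank-based shaping, as in the example above
    lam = len(f)
    ranks = np.empty(lam, dtype=int)
    ranks[np.argsort(f)[::-1]] = np.arange(1, lam + 1)
    raw = np.maximum(0.0, np.log(lam / 2 + 1) - np.log(ranks))
    return raw / raw.sum() - 1.0 / lam

rng = np.random.default_rng(0)
dim, lam, eta = 5, 50, 0.1                           # assumed dimension, population size, learning rate
mu = rng.normal(size=dim)                            # search distribution: mean ...
log_sigma = np.zeros(dim)                            # ... and per-dimension log standard deviation

for generation in range(300):                        # repeat
    sigma = np.exp(log_sigma)
    z = rng.standard_normal((lam, dim))
    x = mu + sigma * z                               # draw lambda samples x_k ~ pi(.|theta)
    f = np.array([fitness(xk) for xk in x])          # evaluate the fitness
    grad_log = np.hstack([z / sigma, z ** 2 - 1.0])  # log-derivatives w.r.t. (mu, log_sigma)
    u = rank_utilities(f)                            # assign the utilities, based on rank
    grad_J = u @ grad_log                            # search gradient estimate
    F = grad_log.T @ grad_log / lam                  # sample-based Fisher estimate
    F += 1e-6 * np.eye(2 * dim)                      # assumed ridge term for invertibility
    delta = eta * np.linalg.solve(F, grad_J)         # natural-gradient step
    mu, log_sigma = mu + delta[:dim], log_sigma + delta[dim:]
    if np.exp(log_sigma).max() < 1e-8:               # simple stopping criterion
        break

print("best point found:", mu)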


See also

* Evolutionary computation
* Covariance matrix adaptation evolution strategy (CMA-ES)


Bibliography

* D. Wierstra, T. Schaul, J. Peters and J. Schmidhuber (2008). "Natural Evolution Strategies". IEEE Congress on Evolutionary Computation (CEC).
* Y. Sun, D. Wierstra, T. Schaul and J. Schmidhuber (2009). "Stochastic Search using the Natural Gradient". International Conference on Machine Learning (ICML).
* T. Glasmachers, T. Schaul, Y. Sun, D. Wierstra and J. Schmidhuber (2010). "Exponential Natural Evolution Strategies". Genetic and Evolutionary Computation Conference (GECCO).
* T. Schaul, T. Glasmachers and J. Schmidhuber (2011). "High Dimensions and Heavy Tails for Natural Evolution Strategies". Genetic and Evolutionary Computation Conference (GECCO).
* T. Schaul (2012). "Natural Evolution Strategies Converge on Sphere Functions". Genetic and Evolutionary Computation Conference (GECCO).

