In mathematical optimization, a trust region is the subset of the region of the objective function that is approximated using a model function (often a quadratic). If an adequate model of the objective function is found within the trust region, then the region is expanded; conversely, if the approximation is poor, then the region is contracted. The fit is evaluated by comparing the ratio of expected improvement from the model approximation with the actual improvement observed in the objective function. Simple thresholding of the ratio is used as the criterion for expansion and contraction; a model function is "trusted" only in the region where it provides a reasonable approximation.

Trust-region methods are in some sense dual to line-search methods: trust-region methods first choose a step size (the size of the trust region) and then a step direction, while line-search methods first choose a step direction and then a step size.

The general idea behind trust-region methods is known by many names; the earliest use of the term seems to be by Sorensen (1982). A popular textbook by Fletcher (1980) calls these algorithms restricted-step methods. Additionally, in an early foundational work on the method, Goldfeld, Quandt, and Trotter (1966) refer to it as quadratic hill-climbing.
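
A minimal sketch of the expand/contract mechanism described above is given below, in Python. It is illustrative only: the exact-Hessian quadratic model, the crude clipped-Newton subproblem solver, the thresholds (0.25 and 0.75), the scaling factors (0.5 and 2.0), and the convention of dividing the actual by the predicted reduction are assumptions of the sketch, not part of the definition above.

    import numpy as np

    def trust_region_minimize(f, grad, hess, x0, radius=1.0, max_iter=200, tol=1e-8):
        # Generic trust-region loop: build a quadratic model of f around x, take a
        # step no longer than the current radius, then grow or shrink the radius
        # according to how well the predicted reduction matches the observed one.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g, B = grad(x), hess(x)
            if np.linalg.norm(g) < tol:
                break
            # Crude subproblem solver: Newton step clipped to the trust region.
            # (Practical implementations use dogleg or Steihaug-CG here.)
            try:
                p = np.linalg.solve(B, -g)
            except np.linalg.LinAlgError:
                p = -g
            if np.linalg.norm(p) > radius or g @ p > 0:
                p = -g * (radius / np.linalg.norm(g))   # clipped steepest-descent fallback
            predicted = -(g @ p + 0.5 * p @ B @ p)      # reduction promised by the model
            actual = f(x) - f(x + p)                    # reduction actually observed
            rho = actual / predicted if predicted > 0 else -1.0
            if rho < 0.25:
                radius *= 0.5                           # poor agreement: contract the region
            elif rho > 0.75:
                radius *= 2.0                           # good agreement: expand the region
            if rho > 0:
                x = x + p                               # accept the step only if f decreased
        return x

Note the ordering in the loop: the radius (the step size) is fixed first, and the step direction then emerges from the constrained subproblem, which is the sense in which trust-region methods are dual to line-search methods.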


Example

Conceptually, in the Levenberg–Marquardt algorithm, the objective function is iteratively approximated by a quadratic surface, then using a linear solver, the estimate is updated. This alone may not converge nicely if the initial guess is too far from the optimum. For this reason, the algorithm instead restricts each step, preventing it from stepping "too far". It operationalizes "too far" as follows. Rather than solving A \, \Delta x = b for \Delta x, it solves \big(A + \lambda \operatorname{diag}(A)\big) \, \Delta x = b, where \operatorname{diag}(A) is the diagonal matrix with the same diagonal as ''A'', and λ is a parameter that controls the trust-region size. Geometrically, this adds a paraboloid centered at \Delta x = 0 to the quadratic form, resulting in a smaller step.

The trick is to change the trust-region size (λ). At each iteration, the damped quadratic fit predicts a certain reduction in the cost function, \Delta f_\text{pred}, which we would expect to be a smaller reduction than the true reduction. Given \Delta x, we can evaluate

: \Delta f_\text{actual} = f(x) - f(x + \Delta x).

By looking at the ratio \Delta f_\text{pred}/\Delta f_\text{actual}, we can adjust the trust-region size. In general, we expect \Delta f_\text{pred} to be a bit smaller than \Delta f_\text{actual}, and so the ratio would be between, say, 0.25 and 0.5. If the ratio is more than 0.5, then we are damping the step too much, so expand the trust region (decrease λ) and iterate. If the ratio is smaller than 0.25, then the true function is diverging "too much" from the trust-region approximation, so shrink the trust region (increase λ) and try again.
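
The sketch below illustrates this λ-update rule, assuming a nonlinear least-squares setting in which A = J^T J and b = -J^T r, so that the cost is f(x) = \frac{1}{2} \|r(x)\|^2. The helper names residual and jacobian, the starting value of λ, and the factor of 2 used to grow and shrink λ are illustrative choices; the 0.25 and 0.5 thresholds follow the ratio test described above, and the predicted reduction is taken from the damped quadratic model, which is one reading of the passage.

    import numpy as np

    def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, max_iter=100, tol=1e-10):
        # Damped Gauss-Newton iteration in which the damping parameter lam is
        # adjusted through the ratio of predicted to actual reduction, using the
        # 0.25 and 0.5 thresholds from the text.  The factor of 2 used to grow
        # and shrink lam is an illustrative choice.
        x = np.asarray(x0, dtype=float)
        cost = lambda z: 0.5 * np.dot(residual(z), residual(z))
        for _ in range(max_iter):
            r = residual(x)
            J = jacobian(x)
            g = J.T @ r                       # gradient of the cost 0.5*||r||^2
            A = J.T @ J                       # Gauss-Newton approximation of the Hessian
            if np.linalg.norm(g) < tol:
                break
            D = np.diag(np.diag(A))           # matrix with the same diagonal as A
            # Damped step: (A + lam*diag(A)) dx = b, with b = -g.
            dx = np.linalg.solve(A + lam * D, -g)
            # Reduction predicted by the damped quadratic fit.
            f_pred = -(g @ dx + 0.5 * dx @ ((A + lam * D) @ dx))
            # Reduction actually observed in the objective.
            f_actual = cost(x) - cost(x + dx)
            if f_actual <= 0 or f_pred / f_actual < 0.25:
                lam *= 2.0                    # model disagrees too much: shrink region, retry
                continue
            x = x + dx                        # accept the step
            if f_pred / f_actual > 0.5:
                lam /= 2.0                    # damping too cautious: expand the region
        return x

For a curve-fitting problem, residual(x) would return the vector of differences between the model predictions and the data, and jacobian(x) its Jacobian with respect to the parameters; large λ forces short, gradient-like steps, while small λ recovers the plain Gauss–Newton step.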


References

* Andrew R. Conn, Nicholas I. M. Gould, Philippe L. Toint, ''Trust-Region Methods'' (MPS-SIAM Series on Optimization).
* Byrd, R. H., R. B. Schnabel, and G. A. Schultz, "A trust region algorithm for nonlinearly constrained optimization", SIAM J. Numer. Anal., 24 (1987), pp. 1152–1170.
* Yuan, Y., "A review of trust region algorithms for optimization", in ICIAM 99: Proceedings of the Fourth International Congress on Industrial & Applied Mathematics, Edinburgh, 2000, Oxford University Press, USA.
* Yuan, Y., "Recent Advances in Trust Region Algorithms", Math. Program., 2015.

