In optimization, the line search strategy is one of two basic iterative approaches to find a local minimum of an objective function. The other approach is the trust region method.
The line search approach first finds a descent direction along which the objective function $f$ will be reduced, and then computes a step size that determines how far $\mathbf{x}$ should move along that direction. The descent direction can be computed by various methods, such as gradient descent or a quasi-Newton method. The step size can be determined either exactly or inexactly.
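As a small illustration of this two-phase structure (a sketch, not part of the article itself; the objective, the names ''f'', ''grad_f'', ''x'' and ''p'' are purely illustrative), the following Python snippet builds the one-dimensional restriction $h(\alpha)=f(\mathbf{x}+\alpha\mathbf{p})$ on which the step size is then chosen:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical quadratic objective and its gradient (illustrative only).
def f(x):
    return 0.5 * x @ x

def grad_f(x):
    return x

x = np.array([3.0, -4.0])       # current iterate
p = -grad_f(x)                  # a descent direction (here: steepest descent)

# The line search works on the restriction of f to the ray x + alpha*p.
h = lambda alpha: f(x + alpha * p)

# For this particular quadratic, h is minimized exactly at alpha = 1;
# in general alpha must be found numerically, exactly or only approximately.
</syntaxhighlight>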
Example use
Here is an example gradient method that uses a line search in step 4.
# Set iteration counter $k=0$, and make an initial guess $\mathbf{x}_0$ for the minimum
# Repeat:
# Compute a descent direction $\mathbf{p}_k$
# Choose $\alpha_k$ to 'loosely' minimize $h(\alpha)=f(\mathbf{x}_k+\alpha\mathbf{p}_k)$ over $\alpha\in\mathbb{R}_+$
# Update $\mathbf{x}_{k+1}=\mathbf{x}_k+\alpha_k\mathbf{p}_k$, and $k=k+1$
# Until $\|\nabla f(\mathbf{x}_k)\|$ < tolerance
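A minimal Python sketch of this loop (illustrative, not from the article) is given below; steepest descent is used for the direction in step 3, and step 4 is delegated to a pluggable line search routine such as the backtracking search sketched further down.

<syntaxhighlight lang="python">
import numpy as np

def gradient_method(f, grad_f, x0, line_search, tol=1e-6, max_iter=1000):
    """Steps 1-6 above: steepest descent with a pluggable line search."""
    x = np.asarray(x0, dtype=float)            # step 1: k = 0, initial guess x_0
    for _ in range(max_iter):                  # step 2: repeat
        g = grad_f(x)
        if np.linalg.norm(g) < tol:            # step 6: until gradient norm < tolerance
            break
        p = -g                                 # step 3: descent direction p_k
        alpha = line_search(f, grad_f, x, p)   # step 4: choose the step size alpha_k
        x = x + alpha * p                      # step 5: x_{k+1} = x_k + alpha_k * p_k
    return x
</syntaxhighlight>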
At the line search step (4) the algorithm might either ''exactly'' minimize ''h'', by solving $h'(\alpha_k)=0$, or ''loosely'', by asking for a sufficient decrease in ''h''. One example of the former is the conjugate gradient method. The latter is called inexact line search and may be performed in a number of ways, such as a backtracking line search or using the Wolfe conditions.
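As an example of an inexact line search, here is a sketch of a backtracking (Armijo) search that could serve as the ''line_search'' argument of the loop above; the constants ''rho'' and ''c'' are conventional illustrative defaults, not values prescribed by the article.

<syntaxhighlight lang="python">
def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink alpha until the Armijo sufficient-decrease condition
    f(x + alpha*p) <= f(x) + c * alpha * grad_f(x)^T p  holds."""
    alpha = alpha0
    fx = f(x)
    slope = grad_f(x) @ p   # directional derivative; negative along a descent direction
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= rho        # backtrack: try a smaller step
    return alpha
</syntaxhighlight>

With the pieces above, a call such as <code>gradient_method(f, grad_f, np.array([3.0, -4.0]), backtracking_line_search)</code> runs the full method.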
Like other optimization methods, line search may be combined with simulated annealing to allow it to jump over some local minima.
Algorithms
Direct search methods
In this method, the minimum must first be bracketed, so the algorithm must identify points $x_1$ and $x_2$ such that the sought minimum lies between them. The interval is then divided by computing $f$ at two internal points, $x_3$ and $x_4$ (with $x_1 < x_3 < x_4 < x_2$), and rejecting whichever of the two outer points is not adjacent to the one of $x_3$ and $x_4$ that has the lower function value. In subsequent steps, only one extra internal point needs to be calculated. Of the various methods of dividing the interval, golden section search is particularly simple and effective, as the interval proportions are preserved regardless of how the search proceeds:
:$\frac{x_3 - x_1}{x_4 - x_1} = \frac{x_4 - x_1}{x_2 - x_1} = \frac{1}{\varphi}$
where
:$\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618$ is the golden ratio.
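A compact Python sketch of golden section search on a bracket $[x_1, x_2]$ (illustrative; the tolerance and return value are arbitrary choices rather than part of the method) could look like this:

<syntaxhighlight lang="python">
import math

def golden_section_search(f, x1, x2, tol=1e-8):
    """Shrink the bracket [x1, x2] around a minimum of a unimodal f,
    keeping golden-ratio proportions so only one new function
    evaluation is needed per iteration."""
    inv_phi = (math.sqrt(5) - 1) / 2    # 1/phi, about 0.618
    x3 = x2 - inv_phi * (x2 - x1)       # lower interior point
    x4 = x1 + inv_phi * (x2 - x1)       # upper interior point
    f3, f4 = f(x3), f(x4)
    while abs(x2 - x1) > tol:
        if f3 < f4:                     # minimum lies in [x1, x4]: drop x2
            x2, x4, f4 = x4, x3, f3
            x3 = x2 - inv_phi * (x2 - x1)
            f3 = f(x3)
        else:                           # minimum lies in [x3, x2]: drop x1
            x1, x3, f3 = x3, x4, f4
            x4 = x1 + inv_phi * (x2 - x1)
            f4 = f(x4)
    return 0.5 * (x1 + x2)

# Example: the minimum of (x - 2)^2 on [0, 5] is found near x = 2.
# golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
</syntaxhighlight>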
See also
* Golden section search
* Grid search
* Learning rate
* Pattern search (optimization)
* Secant method
{{DEFAULTSORT:Line Search}}
[[Category:Optimization algorithms and methods]]