Broyden–Fletcher–Goldfarb–Shanno algorithm

In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems. Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method. Since the updates of the BFGS curvature matrix do not require matrix inversion, its computational complexity is only \mathcal{O}(n^2), compared to \mathcal{O}(n^3) in Newton's method. Also in common use is L-BFGS, a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The BFGS-B variant handles simple box constraints. The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno.


Rationale

The optimization problem is to minimize f(\mathbf{x}), where \mathbf{x} is a vector in \mathbb{R}^n, and f is a differentiable scalar function. There are no constraints on the values that \mathbf{x} can take.

The algorithm begins at an initial estimate \mathbf{x}_0 for the optimal value and proceeds iteratively to obtain a better estimate at each stage.

The search direction \mathbf{p}_k at stage k is given by the solution of the analogue of the Newton equation
:B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k),
where B_k is an approximation to the Hessian matrix at \mathbf{x}_k, which is updated iteratively at each stage, and \nabla f(\mathbf{x}_k) is the gradient of the function evaluated at \mathbf{x}_k. A line search in the direction \mathbf{p}_k is then used to find the next point \mathbf{x}_{k+1} by minimizing f(\mathbf{x}_k + \gamma\mathbf{p}_k) over the scalar \gamma > 0.

The quasi-Newton condition imposed on the update of B_k is
:B_{k+1} (\mathbf{x}_{k+1} - \mathbf{x}_k) = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).
Let \mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k) and \mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k; then B_{k+1} satisfies
:B_{k+1} \mathbf{s}_k = \mathbf{y}_k,
which is the secant equation.

The curvature condition \mathbf{s}_k^\top \mathbf{y}_k > 0 should be satisfied for B_{k+1} to be positive definite, which can be verified by pre-multiplying the secant equation with \mathbf{s}_k^\top. If the function is not strongly convex, the condition has to be enforced explicitly, e.g. by finding a point \mathbf{x}_{k+1} satisfying the Wolfe conditions, which entail the curvature condition, using line search.

Instead of requiring the full Hessian matrix at the point \mathbf{x}_{k+1} to be computed as B_{k+1}, the approximate Hessian at stage k is updated by the addition of two matrices:
:B_{k+1} = B_k + U_k + V_k.
Both U_k and V_k are symmetric rank-one matrices, but their sum is a rank-two update matrix. Both the BFGS and the DFP updating matrices differ from their predecessor by a rank-two matrix. A simpler rank-one alternative is the symmetric rank-one method, which does not guarantee positive definiteness.

In order to maintain the symmetry and positive definiteness of B_{k+1}, the update form can be chosen as
:B_{k+1} = B_k + \alpha\mathbf{u}\mathbf{u}^\top + \beta\mathbf{v}\mathbf{v}^\top.
Imposing the secant condition B_{k+1} \mathbf{s}_k = \mathbf{y}_k and choosing \mathbf{u} = \mathbf{y}_k and \mathbf{v} = B_k \mathbf{s}_k, we obtain
:\alpha = \frac{1}{\mathbf{y}_k^\top \mathbf{s}_k},
:\beta = -\frac{1}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.
Finally, substituting \alpha and \beta into B_{k+1} = B_k + \alpha\mathbf{u}\mathbf{u}^\top + \beta\mathbf{v}\mathbf{v}^\top gives the update equation for B_{k+1}:
:B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^\top B_k^\top}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.
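As a quick numerical sanity check of the update formula above (a minimal Python/NumPy sketch; the random test data and tolerances are illustrative choices, not part of the derivation), the following code applies the rank-two update to a symmetric positive-definite B_k and verifies that the result satisfies the secant equation B_{k+1} s_k = y_k and stays symmetric positive definite when the curvature condition s_k^T y_k > 0 holds:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5

    # A symmetric positive-definite B_k and a step / gradient-difference pair
    A = rng.standard_normal((n, n))
    B = A @ A.T + n * np.eye(n)            # current Hessian approximation B_k
    s = rng.standard_normal(n)             # s_k = x_{k+1} - x_k
    y = s + 0.1 * rng.standard_normal(n)   # y_k, chosen so that s^T y > 0
    assert s @ y > 0                       # curvature condition

    # Rank-two BFGS update: B_{k+1} = B_k + y y^T/(y^T s) - B s s^T B/(s^T B s)
    Bs = B @ s
    B_new = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

    # Secant equation, symmetry and positive definiteness all hold
    assert np.allclose(B_new @ s, y)
    assert np.allclose(B_new, B_new.T)
    assert np.all(np.linalg.eigvalsh(B_new) > 0)

If \mathbf{s}_k^\top \mathbf{y}_k \le 0, the updated matrix need not be positive definite, which is why the curvature condition is enforced through the Wolfe conditions in practice.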


Algorithm

From an initial guess \mathbf{x}_0 and an approximate Hessian matrix B_0, the following steps are repeated as \mathbf{x}_k converges to the solution:
# Obtain a direction \mathbf{p}_k by solving B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k).
# Perform a one-dimensional optimization (line search) to find an acceptable stepsize \alpha_k in the direction found in the first step. If an exact line search is performed, then \alpha_k = \arg\min_\alpha f(\mathbf{x}_k + \alpha\mathbf{p}_k). In practice, an inexact line search usually suffices, with an acceptable \alpha_k satisfying the Wolfe conditions.
# Set \mathbf{s}_k = \alpha_k \mathbf{p}_k and update \mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k.
# \mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).
# B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^\top B_k^\top}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.
Here f(\mathbf{x}) denotes the objective function to be minimized. Convergence can be checked by observing the norm of the gradient, \|\nabla f(\mathbf{x}_k)\|. If B_0 is initialized with B_0 = I, the first step is equivalent to a gradient descent step, but further steps are more and more refined by B_k, the approximation to the Hessian.

The first step of the algorithm is carried out using the inverse of the matrix B_k, which can be obtained efficiently by applying the Sherman–Morrison formula to the update in step 5 of the algorithm, giving
:B_{k+1}^{-1} = \left(I - \frac{\mathbf{s}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) B_k^{-1} \left(I - \frac{\mathbf{y}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) + \frac{\mathbf{s}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}.
This can be computed efficiently without temporary matrices, recognizing that B_k^{-1} is symmetric, and that \mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k and \mathbf{s}_k^\top \mathbf{y}_k are scalars, using an expansion such as
:B_{k+1}^{-1} = B_k^{-1} + \frac{(\mathbf{s}_k^\top \mathbf{y}_k + \mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k)(\mathbf{s}_k \mathbf{s}_k^\top)}{(\mathbf{s}_k^\top \mathbf{y}_k)^2} - \frac{B_k^{-1} \mathbf{y}_k \mathbf{s}_k^\top + \mathbf{s}_k \mathbf{y}_k^\top B_k^{-1}}{\mathbf{s}_k^\top \mathbf{y}_k}.

Therefore, in order to avoid any matrix inversion, the inverse of the Hessian can be approximated instead of the Hessian itself: H_k \overset{\operatorname{def}}{=} B_k^{-1}. From an initial guess \mathbf{x}_0 and an approximate inverse Hessian matrix H_0, the following steps are repeated as \mathbf{x}_k converges to the solution:
# Obtain a direction \mathbf{p}_k by computing \mathbf{p}_k = -H_k \nabla f(\mathbf{x}_k).
# Perform a one-dimensional optimization (line search) to find an acceptable stepsize \alpha_k in the direction found in the first step. If an exact line search is performed, then \alpha_k = \arg\min_\alpha f(\mathbf{x}_k + \alpha\mathbf{p}_k). In practice, an inexact line search usually suffices, with an acceptable \alpha_k satisfying the Wolfe conditions.
# Set \mathbf{s}_k = \alpha_k \mathbf{p}_k and update \mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k.
# \mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).
# H_{k+1} = H_k + \frac{(\mathbf{s}_k^\top \mathbf{y}_k + \mathbf{y}_k^\top H_k \mathbf{y}_k)(\mathbf{s}_k \mathbf{s}_k^\top)}{(\mathbf{s}_k^\top \mathbf{y}_k)^2} - \frac{H_k \mathbf{y}_k \mathbf{s}_k^\top + \mathbf{s}_k \mathbf{y}_k^\top H_k}{\mathbf{s}_k^\top \mathbf{y}_k}.

In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.
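The following is a minimal, self-contained Python/NumPy sketch of the inverse-Hessian iteration described above, intended only as an illustration. It deviates from the description in one respect: it uses a simple backtracking (Armijo) line search rather than a Wolfe line search, so the curvature condition \mathbf{s}_k^\top \mathbf{y}_k > 0 is not guaranteed and the update is skipped when it fails. The Rosenbrock test function, starting point and tolerances are illustrative choices, not part of the algorithm.

    import numpy as np

    def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
        """Minimize f with the inverse-Hessian BFGS iteration (illustrative sketch)."""
        n = x0.size
        x = x0.astype(float)
        H = np.eye(n)                       # H_0 = I: the first step is steepest descent
        I = np.eye(n)
        g = grad(x)
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:     # convergence test on the gradient norm
                break
            p = -H @ g                      # step 1: p_k = -H_k grad f(x_k)
            # step 2: backtracking line search with the Armijo (sufficient decrease)
            # condition; a full implementation would enforce the Wolfe conditions
            alpha, c1, shrink = 1.0, 1e-4, 0.5
            fx = f(x)
            while f(x + alpha * p) > fx + c1 * alpha * (g @ p):
                alpha *= shrink
            s = alpha * p                   # step 3: s_k and x_{k+1}
            x_new = x + s
            g_new = grad(x_new)
            y = g_new - g                   # step 4: y_k
            sy = s @ y
            if sy > 1e-10:                  # curvature condition; skip update otherwise
                # step 5: BFGS update of the inverse Hessian approximation
                rho = 1.0 / sy
                V = I - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)
            x, g = x_new, g_new
        return x

    # Example: minimize the Rosenbrock function, whose minimum is at (1, 1)
    def rosenbrock(x):
        return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

    def rosenbrock_grad(x):
        return np.array([
            -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
            200 * (x[1] - x[0]**2),
        ])

    x_star = bfgs(rosenbrock, rosenbrock_grad, np.array([-1.2, 1.0]))
    print(x_star)   # close to [1. 1.]

With the starting point shown, the iteration typically converges to the minimizer (1, 1) well within the iteration limit used here; replacing the Armijo search with a Wolfe line search recovers the textbook form of the method.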


Notable implementations

Notable open source implementations are:
* ALGLIB implements BFGS and its limited-memory version in C++ and C#.
* GNU Octave uses a form of BFGS in its fsolve function, with trust region extensions.
* The GSL implements BFGS as gsl_multimin_fdfminimizer_vector_bfgs2.
* In R, the BFGS algorithm (and the L-BFGS-B version that allows box constraints) is implemented as an option of the base function optim().
* In SciPy, the scipy.optimize.fmin_bfgs function implements BFGS. It is also possible to run BFGS using any of the L-BFGS algorithms by setting the parameter L to a very large number. A short usage sketch follows this list.
* In Julia, the Optim.jl package implements BFGS and L-BFGS as solver options of the optimize() function (among other options).
Notable proprietary implementations include:
* The large scale nonlinear optimization software Artelys Knitro implements, among others, both BFGS and L-BFGS algorithms.
* In the MATLAB Optimization Toolbox, the fminunc function uses BFGS with cubic line search when the problem size is set to "medium scale."
* Mathematica includes BFGS.
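As a usage illustration for the SciPy entry above (a minimal sketch; scipy.optimize.minimize with method="BFGS" is the interface generally recommended over the legacy fmin_bfgs, and the Rosenbrock test problem is an arbitrary example):

    import numpy as np
    from scipy.optimize import minimize

    def rosenbrock(x):
        return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

    def rosenbrock_grad(x):
        return np.array([
            -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
            200 * (x[1] - x[0]**2),
        ])

    res = minimize(rosenbrock, x0=np.array([-1.2, 1.0]),
                   method="BFGS", jac=rosenbrock_grad)
    print(res.x)         # approximately [1. 1.]
    print(res.hess_inv)  # final approximation of the inverse Hessian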


See also

* BHHH algorithm
* Davidon–Fletcher–Powell formula
* Gradient descent
* L-BFGS
* Levenberg–Marquardt algorithm
* Nelder–Mead method
* Pattern search (optimization)
* Quasi-Newton methods
* Symmetric rank-one

