In numerical analysis, Broyden's method is a quasi-Newton method for finding roots in k variables. It was originally described by C. G. Broyden in 1965.

Newton's method for solving \mathbf f(\mathbf x) = \mathbf 0 uses the Jacobian matrix, \mathbf J, at every iteration. However, computing this Jacobian can be a difficult and expensive operation; for large problems such as those involving solving the Kohn–Sham equations in quantum mechanics the number of variables can be in the hundreds of thousands. The idea behind Broyden's method is to compute the whole Jacobian at most only at the first iteration, and to do rank-one updates at other iterations. In 1979 Gay proved that when Broyden's method is applied to a linear system of size n \times n, it terminates in 2n steps, although like all quasi-Newton methods, it may not converge for nonlinear systems.


Description of the method


Solving a single-variable nonlinear equation

In the secant method, we replace the first derivative f' at x_n with the finite-difference approximation:

:f'(x_n) \simeq \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}},

and proceed similar to Newton's method:

:x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},

where n is the iteration index.
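A minimal sketch of this iteration in Python; the function name secant, the stopping tolerance, and the iteration cap below are illustrative choices, not part of the method's definition:

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Find a root of f starting from two initial guesses x0 and x1."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            raise ZeroDivisionError("flat secant; pick different guesses")
        # Newton-like step using the finite-difference slope
        # f'(x_n) ~ (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1}).
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

For example, secant(lambda x: x*x - 2.0, 1.0, 2.0) converges to \sqrt 2 \approx 1.41421 in a handful of iterations.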


Solving a system of nonlinear equations

Consider a system of nonlinear equations in k unknowns

:\mathbf f(\mathbf x) = \mathbf 0,

where \mathbf f is a vector-valued function of the vector \mathbf x:

:\mathbf x = (x_1, x_2, x_3, \dotsc, x_k),
:\mathbf f(\mathbf x) = \big(f_1(x_1, x_2, \dotsc, x_k), f_2(x_1, x_2, \dotsc, x_k), \dotsc, f_k(x_1, x_2, \dotsc, x_k)\big).

For such problems, Broyden gives a variation of the one-dimensional Newton's method, replacing the derivative with an approximate Jacobian \mathbf J. The approximate Jacobian matrix is determined iteratively based on the secant equation, a finite-difference approximation:

:\mathbf J_n (\mathbf x_n - \mathbf x_{n-1}) \simeq \mathbf f(\mathbf x_n) - \mathbf f(\mathbf x_{n-1}),

where n is the iteration index. For clarity, define

:\mathbf f_n = \mathbf f(\mathbf x_n),
:\Delta \mathbf x_n = \mathbf x_n - \mathbf x_{n-1},
:\Delta \mathbf f_n = \mathbf f_n - \mathbf f_{n-1},

so the above may be rewritten as

:\mathbf J_n \Delta \mathbf x_n \simeq \Delta \mathbf f_n.

The above equation is underdetermined when k is greater than one. Broyden suggested using the most recent estimate of the Jacobian matrix, \mathbf J_{n-1}, and then improving upon it by requiring that the new form is a solution to the most recent secant equation, and that there is minimal modification to \mathbf J_{n-1}:

:\mathbf J_n = \mathbf J_{n-1} + \frac{\Delta \mathbf f_n - \mathbf J_{n-1} \Delta \mathbf x_n}{\|\Delta \mathbf x_n\|^2} \Delta \mathbf x_n^{\mathrm T}.

This minimizes the Frobenius norm

:\|\mathbf J_n - \mathbf J_{n-1}\|_{\mathrm F}.

One then updates the variables using the approximate Jacobian, in what is called a quasi-Newton approach:

:\mathbf x_{n+1} = \mathbf x_n - \alpha \mathbf J_n^{-1} \mathbf f(\mathbf x_n).

If \alpha = 1 this is the full Newton step; commonly a line search or trust region method is used to control \alpha. The initial Jacobian can be taken as a diagonal, unit matrix, although it is more common to scale it based upon the first step. Broyden also suggested using the Sherman–Morrison formula to directly update the inverse of the approximate Jacobian matrix:

:\mathbf J_n^{-1} = \mathbf J_{n-1}^{-1} + \frac{\Delta \mathbf x_n - \mathbf J_{n-1}^{-1} \Delta \mathbf f_n}{\Delta \mathbf x_n^{\mathrm T} \mathbf J_{n-1}^{-1} \Delta \mathbf f_n} \Delta \mathbf x_n^{\mathrm T} \mathbf J_{n-1}^{-1}.

This first method is commonly known as the "good Broyden's method." A similar technique can be derived by using a slightly different modification to \mathbf J_{n-1}. This yields a second method, the so-called "bad Broyden's method":

:\mathbf J_n^{-1} = \mathbf J_{n-1}^{-1} + \frac{\Delta \mathbf x_n - \mathbf J_{n-1}^{-1} \Delta \mathbf f_n}{\|\Delta \mathbf f_n\|^2} \Delta \mathbf f_n^{\mathrm T}.

This minimizes a different Frobenius norm

:\|\mathbf J_n^{-1} - \mathbf J_{n-1}^{-1}\|_{\mathrm F}.

In his original paper Broyden could not get the bad method to work, but there are cases where it does, for which several explanations have been proposed. Many other quasi-Newton schemes have been suggested in optimization, such as BFGS, where one seeks a maximum or minimum by finding zeros of the first derivatives (zeros of the gradient in multiple dimensions). The Jacobian of the gradient is called the Hessian and is symmetric, adding further constraints to its approximation.
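The following is a minimal sketch of the "good" method in Python, maintaining the inverse approximation with the Sherman–Morrison update above; the identity initial guess for \mathbf J^{-1}, the fixed \alpha = 1, and the test system are illustrative assumptions, and a practical solver would add a line search and safeguards against a vanishing denominator:

import numpy as np

def broyden_good(f, x0, tol=1e-10, max_iter=200):
    """Solve f(x) = 0 with 'good' Broyden updates of the inverse Jacobian."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    J_inv = np.eye(len(x))        # initial guess for J^-1 (here: identity)
    for _ in range(max_iter):
        dx = -J_inv @ fx          # quasi-Newton step with alpha = 1
        x_new = x + dx
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        df = fx_new - fx
        # Sherman-Morrison rank-one update:
        # J_inv += (dx - J_inv df) (dx^T J_inv) / (dx^T J_inv df)
        u = J_inv @ df
        J_inv += np.outer(dx - u, dx @ J_inv) / (dx @ u)
        x, fx = x_new, fx_new
    return x

# Example: solve x = cos(y)/2, y = sin(x)/2.
print(broyden_good(lambda v: np.array([v[0] - 0.5 * np.cos(v[1]),
                                       v[1] - 0.5 * np.sin(v[0])]),
                   [0.0, 0.0]))

Note that no Jacobian of f is ever formed: each iteration costs one function evaluation plus O(k^2) arithmetic for the rank-one update.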


The Broyden class of methods

In addition to the two methods described above, Broyden defined a wider class of related methods. In general, methods in the ''Broyden class'' are given in the form

:\mathbf B_{k+1} = \mathbf B_k - \frac{\mathbf B_k \mathbf s_k \mathbf s_k^{\mathrm T} \mathbf B_k}{\mathbf s_k^{\mathrm T} \mathbf B_k \mathbf s_k} + \frac{\mathbf y_k \mathbf y_k^{\mathrm T}}{\mathbf y_k^{\mathrm T} \mathbf s_k} + \phi_k \left(\mathbf s_k^{\mathrm T} \mathbf B_k \mathbf s_k\right) \mathbf v_k \mathbf v_k^{\mathrm T},

where \mathbf y_k := \mathbf f(\mathbf x_{k+1}) - \mathbf f(\mathbf x_k), \mathbf s_k := \mathbf x_{k+1} - \mathbf x_k, and

:\mathbf v_k = \left[\frac{\mathbf y_k}{\mathbf y_k^{\mathrm T} \mathbf s_k} - \frac{\mathbf B_k \mathbf s_k}{\mathbf s_k^{\mathrm T} \mathbf B_k \mathbf s_k}\right],

and \phi_k \in \mathbb R for each k = 1, 2, \dotsc. The choice of \phi_k determines the method; a one-step sketch is given after the list below.

Other methods in the Broyden class have been introduced by other authors:
* The Davidon–Fletcher–Powell (DFP) method, the only member of this class published before the two methods defined by Broyden. For the DFP method, \phi_k = 1.
* Anderson's iterative method, which uses a least squares approach to the Jacobian.
* Schubert's or sparse Broyden algorithm – a modification for sparse Jacobian matrices.
* The Pulay approach, often used in density functional theory.
* A limited-memory method by Srivastava for the root-finding problem which uses only a few recent iterations.
* Klement (2014) – uses fewer iterations to solve some systems.
* Multisecant methods for density functional theory problems.
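To make the role of \phi_k concrete, here is a minimal sketch in Python of one Broyden-class update of the approximation \mathbf B, under the convention above in which \phi_k = 1 gives the DFP update (and \phi_k = 0 gives the BFGS update); the function name and the use of NumPy are choices made for this example:

import numpy as np

def broyden_class_update(B, s, y, phi):
    """One Broyden-class update of the approximation B.

    s = x_{k+1} - x_k, y = f(x_{k+1}) - f(x_k);
    phi = 0 reproduces BFGS, phi = 1 reproduces DFP.
    """
    Bs = B @ s
    sBs = s @ Bs                  # s^T B s
    ys = y @ s                    # y^T s
    v = y / ys - Bs / sBs
    return (B
            - np.outer(Bs, Bs) / sBs
            + np.outer(y, y) / ys
            + phi * sBs * np.outer(v, v))

For \phi_k between 0 and 1 the result is a convex combination of the BFGS and DFP updates.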


See also

* Secant method
* Newton's method
* Quasi-Newton method
* Newton's method in optimization
* Davidon–Fletcher–Powell formula
* Broyden–Fletcher–Goldfarb–Shanno (BFGS) method


External links


* Simple basic explanation: The story of the blind archer