Gauss–Newton algorithm

The Gauss–Newton algorithm is used to solve non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of Newton's method for finding a minimum of a non-linear function. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximate zeroes of the components of the sum, and thus minimizing the sum. It has the advantage that second derivatives, which can be challenging to compute, are not required. Non-linear least squares problems arise, for instance, in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations. The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton, and first appeared in Gauss' 1809 work ''Theoria motus corporum coelestium in sectionibus conicis solem ambientum''.


Description

Given m functions \mathbf{r} = (r_1, \ldots, r_m) (often called residuals) of n variables \boldsymbol{\beta} = (\beta_1, \ldots, \beta_n), with m \geq n, the Gauss–Newton algorithm iteratively finds the value of the variables that minimizes the sum of squares (Björck 1996)

S(\boldsymbol{\beta}) = \sum_{i=1}^m r_i(\boldsymbol{\beta})^2.

Starting with an initial guess \boldsymbol{\beta}^{(0)} for the minimum, the method proceeds by the iterations

\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} - \left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right),

where, if \mathbf{r} and \boldsymbol{\beta} are column vectors, the entries of the Jacobian matrix are

\left(\mathbf{J_r}\right)_{ij} = \frac{\partial r_i\left(\boldsymbol{\beta}^{(s)}\right)}{\partial \beta_j},

and the symbol ^\mathsf{T} denotes the matrix transpose.

At each iteration, the update \Delta = \boldsymbol{\beta}^{(s+1)} - \boldsymbol{\beta}^{(s)} can be found by rearranging the previous equation in the following two steps:

* \Delta = -\left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right)
* \mathbf{J_r}^\mathsf{T} \mathbf{J_r} \Delta = -\mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right)

With the substitutions A = \mathbf{J_r}^\mathsf{T} \mathbf{J_r}, \mathbf{b} = -\mathbf{J_r}^\mathsf{T} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right), and \mathbf{x} = \Delta, this turns into the conventional matrix equation A\mathbf{x} = \mathbf{b}, which can then be solved by a variety of methods (see Notes).

If m = n, the iteration simplifies to

\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} - \left(\mathbf{J_r}\right)^{-1} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right),

which is a direct generalization of Newton's method in one dimension.

In data fitting, where the goal is to find the parameters \boldsymbol{\beta} such that a given model function \mathbf{y} = f(\mathbf{x}, \boldsymbol{\beta}) best fits some data points (x_i, y_i), the functions r_i are the residuals:

r_i(\boldsymbol{\beta}) = y_i - f\left(x_i, \boldsymbol{\beta}\right).

Then, the Gauss–Newton method can be expressed in terms of the Jacobian \mathbf{J_f} = -\mathbf{J_r} of the function f as

\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} + \left(\mathbf{J_f}^\mathsf{T} \mathbf{J_f}\right)^{-1} \mathbf{J_f}^\mathsf{T} \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right).

Note that \left(\mathbf{J_f}^\mathsf{T} \mathbf{J_f}\right)^{-1} \mathbf{J_f}^\mathsf{T} is the left pseudoinverse of \mathbf{J_f}.
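The iteration above translates directly into code. The following is a minimal sketch in Python/NumPy, assuming the caller supplies a residual function and its Jacobian; the names gauss_newton, r and jac are illustrative, not part of the algorithm's standard statement.

    import numpy as np

    def gauss_newton(r, jac, beta0, n_iter=10):
        """Iterate beta <- beta - (Jr^T Jr)^(-1) Jr^T r(beta)."""
        beta = np.asarray(beta0, dtype=float)
        for _ in range(n_iter):
            J = jac(beta)          # m x n Jacobian of the residuals at beta
            res = r(beta)          # m-vector of residuals at beta
            # Solve the normal equations Jr^T Jr delta = -Jr^T r for the update.
            delta = np.linalg.solve(J.T @ J, -J.T @ res)
            beta = beta + delta
        return beta

In practice the loop would also include a stopping test (for example, on the size of the update or of the gradient), omitted here for brevity.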


Notes

The assumption m \geq n in the algorithm statement is necessary, as otherwise the matrix \mathbf{J_r}^\mathsf{T} \mathbf{J_r} is not invertible and the normal equations cannot be solved (at least uniquely).

The Gauss–Newton algorithm can be derived by linearly approximating the vector of functions r_i. Using Taylor's theorem, we can write at every iteration

\mathbf{r}(\boldsymbol{\beta}) \approx \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol{\beta}^{(s)}\right)\Delta

with \Delta = \boldsymbol{\beta} - \boldsymbol{\beta}^{(s)}. The task of finding the \Delta that minimizes the sum of squares of the right-hand side, i.e.

\min_{\Delta} \left\|\mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol{\beta}^{(s)}\right)\Delta\right\|_2^2,

is a linear least-squares problem, which can be solved explicitly, yielding the normal equations in the algorithm.

The normal equations are n simultaneous linear equations in the unknown increments \Delta. They may be solved in one step, using Cholesky decomposition, or, better, the QR factorization of \mathbf{J_r}. For large systems, an iterative method, such as the conjugate gradient method, may be more efficient. If there is a linear dependence between columns of \mathbf{J_r}, the iterations will fail, as \mathbf{J_r}^\mathsf{T} \mathbf{J_r} becomes singular.

When \mathbf{r} is complex, \mathbf{r}: \Complex^n \to \Complex, the conjugate form should be used: \left(\overline{\mathbf{J_r}}^\mathsf{T} \mathbf{J_r}\right)^{-1} \overline{\mathbf{J_r}}^\mathsf{T}.
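As a concrete illustration of the QR route mentioned above, one step can be computed from a thin QR factorization of the Jacobian. A minimal Python/NumPy sketch (the helper name is an assumption):

    import numpy as np

    def gauss_newton_step_qr(J, res):
        """Increment delta minimizing ||res + J delta||_2, via a thin QR of J."""
        Q, R = np.linalg.qr(J)             # J = Q R, with R upper triangular (n x n)
        # For full column rank J, the normal equations reduce to R delta = -Q^T res.
        return np.linalg.solve(R, -Q.T @ res)

Working with the QR factors avoids forming \mathbf{J_r}^\mathsf{T} \mathbf{J_r} explicitly, whose condition number is the square of that of \mathbf{J_r}.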


Example

In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and the model's predictions.

In a biology experiment studying the relation between substrate concentration [S] and reaction rate in an enzyme-mediated reaction, seven measurements of concentration and rate were obtained. It is desired to find a curve (model function) of the form

\text{rate} = \frac{V_\text{max} [S]}{K_M + [S]}

that best fits the data in the least-squares sense, with the parameters V_\text{max} and K_M to be determined.

Denote by x_i and y_i the values of [S] and rate respectively, with i = 1, \dots, 7. Let \beta_1 = V_\text{max} and \beta_2 = K_M. We will find \beta_1 and \beta_2 such that the sum of squares of the residuals

r_i = y_i - \frac{\beta_1 x_i}{\beta_2 + x_i}, \quad (i = 1, \dots, 7)

is minimized.

The Jacobian \mathbf{J_r} of the vector of residuals r_i with respect to the unknowns \beta_j is a 7 \times 2 matrix with the i-th row having the entries

\frac{\partial r_i}{\partial \beta_1} = -\frac{x_i}{\beta_2 + x_i}; \quad \frac{\partial r_i}{\partial \beta_2} = \frac{\beta_1 x_i}{\left(\beta_2 + x_i\right)^2}.

Starting with the initial estimates \beta_1 = 0.9 and \beta_2 = 0.2, after five iterations of the Gauss–Newton algorithm the optimal values \hat\beta_1 = 0.362 and \hat\beta_2 = 0.556 are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. A plot of the curve determined by the model for the optimal parameters together with the observed data shows the quality of the fit.
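The fit can be reproduced with a few lines of Python/NumPy using the residuals, Jacobian and initial estimates given above. The numeric data arrays below are assumed values standing in for the experimental measurements and should be replaced by the actual data.

    import numpy as np

    S_conc = np.array([0.038, 0.194, 0.425, 0.626, 1.253, 2.500, 3.740])      # assumed [S] values
    rate   = np.array([0.050, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317])  # assumed rates

    def residuals(beta):
        return rate - beta[0] * S_conc / (beta[1] + S_conc)

    def jacobian(beta):
        J = np.empty((len(S_conc), 2))
        J[:, 0] = -S_conc / (beta[1] + S_conc)                  # dr_i / d beta_1
        J[:, 1] = beta[0] * S_conc / (beta[1] + S_conc) ** 2    # dr_i / d beta_2
        return J

    beta = np.array([0.9, 0.2])                                 # initial estimates from the text
    for _ in range(5):
        J, res = jacobian(beta), residuals(beta)
        beta = beta + np.linalg.solve(J.T @ J, -J.T @ res)
    print(beta)      # with the experiment's data this approaches (0.362, 0.556)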


Convergence properties

The Gauss–Newton iteration is guaranteed to converge toward a local minimum point \hat{\beta} under four conditions: the functions r_1, \ldots, r_m are twice continuously differentiable in an open convex set D \ni \hat{\beta}, the Jacobian \mathbf{J_r}(\hat{\beta}) is of full column rank, the initial iterate \beta^{(0)} is near \hat{\beta}, and the local minimum value |S(\hat{\beta})| is small. The convergence is quadratic if |S(\hat{\beta})| = 0.

It can be shown that the increment \Delta is a descent direction for S, and, if the algorithm converges, then the limit is a stationary point of S. For a large minimum value |S(\hat{\beta})|, however, convergence is not guaranteed, not even local convergence as in Newton's method, or convergence under the usual Wolfe conditions.

The rate of convergence of the Gauss–Newton algorithm can approach quadratic. The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrix \mathbf{J_r}^\mathsf{T} \mathbf{J_r} is ill-conditioned. For example, consider the problem with m = 2 equations and n = 1 variable, given by

\begin{align} r_1(\beta) &= \beta + 1, \\ r_2(\beta) &= \lambda \beta^2 + \beta - 1. \end{align}

The optimum is at \beta = 0. (Actually the optimum is at \beta = -1 for \lambda = 2, because S(0) = 1^2 + (-1)^2 = 2, but S(-1) = 0.) If \lambda = 0, then the problem is in fact linear and the method finds the optimum in one iteration. If |\lambda| < 1, then the method converges linearly and the error decreases asymptotically with a factor |\lambda| at every iteration. However, if |\lambda| > 1, then the method does not even converge locally.
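This behaviour is easy to reproduce numerically. The Python/NumPy sketch below (function name illustrative) applies Gauss–Newton to the two residuals above for a few values of \lambda, starting near \beta = 0, and prints the iterates.

    import numpy as np

    def gn_path(lam, beta0=0.1, n_iter=6):
        """Return the Gauss-Newton iterates for the one-variable example."""
        beta, path = beta0, [beta0]
        for _ in range(n_iter):
            r = np.array([beta + 1.0, lam * beta**2 + beta - 1.0])
            J = np.array([[1.0], [2.0 * lam * beta + 1.0]])      # 2x1 Jacobian
            delta = np.linalg.solve(J.T @ J, -J.T @ r)
            beta = beta + delta[0]
            path.append(beta)
        return path

    for lam in (0.0, 0.5, 2.0):
        # lam = 0: a single step reaches beta = 0; |lam| < 1: the error shrinks by
        # roughly |lam| per step; |lam| > 1: iterates started near 0 move away from it.
        print(lam, [round(b, 4) for b in gn_path(lam)])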


Solving overdetermined systems of equations

The Gauss–Newton iteration

\mathbf{x}^{(j+1)} = \mathbf{x}^{(j)} - J\left(\mathbf{x}^{(j)}\right)^\dagger \mathbf{f}\left(\mathbf{x}^{(j)}\right), \quad j = 0, 1, \ldots

is an effective method for solving overdetermined systems of equations in the form \mathbf{f}(\mathbf{x}) = \mathbf{0} with

\mathbf{f}(\mathbf{x}) = \begin{bmatrix} f_1(x_1, \ldots, x_n) \\ \vdots \\ f_m(x_1, \ldots, x_n) \end{bmatrix}

and m > n, where J(\mathbf{x})^\dagger is the Moore–Penrose inverse (also known as the pseudoinverse) of the Jacobian matrix J(\mathbf{x}) of \mathbf{f}(\mathbf{x}). It can be considered an extension of Newton's method and enjoys the same local quadratic convergence toward isolated regular solutions.

If the solution does not exist but the initial iterate \mathbf{x}^{(0)} is near a point \hat{\mathbf{x}} = (\hat{x}_1, \ldots, \hat{x}_n) at which the sum of squares \sum_{i=1}^m |f_i(x_1, \ldots, x_n)|^2 \equiv \|\mathbf{f}(\mathbf{x})\|_2^2 reaches a small local minimum, the Gauss–Newton iteration converges linearly to \hat{\mathbf{x}}. The point \hat{\mathbf{x}} is often called a least squares solution of the overdetermined system.
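A minimal Python/NumPy sketch of this pseudoinverse form of the iteration, with an assumed small test system and illustrative function names:

    import numpy as np

    def gauss_newton_pinv(f, jac, x0, n_iter=20):
        """x <- x - J(x)^+ f(x), with J^+ the Moore-Penrose pseudoinverse."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            x = x - np.linalg.pinv(jac(x)) @ f(x)
        return x

    # Assumed example: three linear equations in two unknowns, consistent at (1, 2).
    f = lambda x: np.array([x[0] - 1.0, x[1] - 2.0, (x[0] - 1.0) + (x[1] - 2.0)])
    jac = lambda x: np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    print(gauss_newton_pinv(f, jac, [5.0, -3.0]))   # converges to [1, 2] (in one step here)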


Derivation from Newton's method

In what follows, the Gauss–Newton algorithm will be derived from Newton's method for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear.

The recurrence relation for Newton's method for minimizing a function S of parameters \boldsymbol{\beta} is

\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} - \mathbf{H}^{-1} \mathbf{g},

where \mathbf{g} denotes the gradient vector of S, and \mathbf{H} denotes the Hessian matrix of S.

Since S = \sum_{i=1}^m r_i^2, the gradient is given by

g_j = 2 \sum_{i=1}^m r_i \frac{\partial r_i}{\partial \beta_j}.

Elements of the Hessian are calculated by differentiating the gradient elements, g_j, with respect to \beta_k:

H_{jk} = 2 \sum_{i=1}^m \left(\frac{\partial r_i}{\partial \beta_j} \frac{\partial r_i}{\partial \beta_k} + r_i \frac{\partial^2 r_i}{\partial \beta_j \, \partial \beta_k}\right).

The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by

H_{jk} \approx 2 \sum_{i=1}^m J_{ij} J_{ik},

where J_{ij} = \frac{\partial r_i}{\partial \beta_j} are entries of the Jacobian \mathbf{J_r}. Note that when the exact Hessian is evaluated near an exact fit we have near-zero r_i, so the second term becomes near-zero as well, which justifies the approximation. The gradient and the approximate Hessian can be written in matrix notation as

\mathbf{g} = 2 \mathbf{J_r}^\mathsf{T} \mathbf{r}, \quad \mathbf{H} \approx 2 \mathbf{J_r}^\mathsf{T} \mathbf{J_r}.

These expressions are substituted into the recurrence relation above to obtain the operational equations

\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} + \Delta; \quad \Delta = -\left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}.

Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation

\left|r_i \frac{\partial^2 r_i}{\partial \beta_j \, \partial \beta_k}\right| \ll \left|\frac{\partial r_i}{\partial \beta_j} \frac{\partial r_i}{\partial \beta_k}\right|

that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected (a small numerical check of the approximation follows the list):

1. The function values r_i are small in magnitude, at least around the minimum.
2. The functions are only "mildly" nonlinear, so that \frac{\partial^2 r_i}{\partial \beta_j \, \partial \beta_k} is relatively small in magnitude.


Improved versions

With the Gauss–Newton method the sum of squares of the residuals S may not decrease at every iteration. However, since \Delta is a descent direction, unless S\left(\boldsymbol{\beta}^s\right) is a stationary point, it holds that S\left(\boldsymbol{\beta}^s + \alpha\Delta\right) < S\left(\boldsymbol{\beta}^s\right) for all sufficiently small \alpha > 0. Thus, if divergence occurs, one solution is to employ a fraction \alpha of the increment vector \Delta in the updating formula:

\boldsymbol{\beta}^{s+1} = \boldsymbol{\beta}^s + \alpha \Delta.

In other words, the increment vector is too long, but it still points "downhill", so going just a part of the way will decrease the objective function S. An optimal value for \alpha can be found by using a line search algorithm, that is, the magnitude of \alpha is determined by finding the value that minimizes S, usually using a direct search method in the interval 0 < \alpha < 1 or a backtracking line search such as the Armijo line search. Typically, \alpha should be chosen such that it satisfies the Wolfe conditions or the Goldstein conditions.

In cases where the direction of the shift vector is such that the optimal fraction \alpha is close to zero, an alternative method for handling divergence is the use of the Levenberg–Marquardt algorithm, a trust region method. The normal equations are modified in such a way that the increment vector is rotated towards the direction of steepest descent,

\left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r} + \lambda \mathbf{D}\right) \Delta = -\mathbf{J_r}^\mathsf{T} \mathbf{r},

where \mathbf{D} is a positive diagonal matrix. Note that when \mathbf{D} is the identity matrix \mathbf{I} and \lambda \to +\infty, then

\lambda \Delta = \lambda \left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r} + \lambda \mathbf{I}\right)^{-1} \left(-\mathbf{J_r}^\mathsf{T} \mathbf{r}\right) = \left(\mathbf{I} - \mathbf{J_r}^\mathsf{T} \mathbf{J_r} / \lambda + \cdots\right) \left(-\mathbf{J_r}^\mathsf{T} \mathbf{r}\right) \to -\mathbf{J_r}^\mathsf{T} \mathbf{r},

therefore the direction of \Delta approaches the direction of the negative gradient -\mathbf{J_r}^\mathsf{T} \mathbf{r}. The so-called Marquardt parameter \lambda may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time \lambda is changed. A more efficient strategy is this: when divergence occurs, increase the Marquardt parameter until there is a decrease in S. Then retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached, at which point the Marquardt parameter can be set to zero; the minimization of S then becomes a standard Gauss–Newton minimization.
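A sketch of the damped update in Python/NumPy: the full Gauss–Newton increment is computed, and the fraction \alpha is halved (a simple backtracking strategy, not a full Wolfe or Goldstein line search) until S decreases. The Levenberg–Marquardt modification of the normal equations is indicated in a comment; all function names are illustrative.

    import numpy as np

    def damped_gn_step(r, jac, beta):
        """One Gauss-Newton step with a fractional (backtracking) update."""
        res = r(beta)
        J = jac(beta)
        delta = np.linalg.solve(J.T @ J, -J.T @ res)       # full Gauss-Newton increment
        # Levenberg-Marquardt variant: solve (J^T J + lam * D) delta = -J^T res,
        # with, e.g., D = np.diag(np.diag(J.T @ J)) and a damping parameter lam.
        S_old = res @ res
        alpha = 1.0
        while alpha > 1e-8:
            trial = r(beta + alpha * delta)
            if trial @ trial < S_old:                      # S decreased: accept the step
                break
            alpha *= 0.5                                   # otherwise shorten the step
        return beta + alpha * delta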


Large-scale optimization

For large-scale optimization, the Gauss–Newton method is of special interest because it is often (though certainly not always) true that the matrix \mathbf{J_r} is more sparse than the approximate Hessian \mathbf{J_r}^\mathsf{T} \mathbf{J_r}. In such cases, the step calculation itself will typically need to be done with an approximate iterative method appropriate for large and sparse problems, such as the conjugate gradient method.

In order to make this kind of approach work, one needs at least an efficient method for computing the product

\mathbf{J_r}^\mathsf{T} \mathbf{J_r} \mathbf{p}

for some vector \mathbf{p}. With sparse matrix storage, it is in general practical to store the rows of \mathbf{J_r} in a compressed form (e.g., without zero entries), making a direct computation of the above product tricky due to the transposition. However, if one defines \mathbf{c}_i as row i of the matrix \mathbf{J_r}, the following simple relation holds:

\mathbf{J_r}^\mathsf{T} \mathbf{J_r} \mathbf{p} = \sum_i \mathbf{c}_i \left(\mathbf{c}_i \cdot \mathbf{p}\right),

so that every row contributes additively and independently to the product. In addition to respecting a practical sparse storage structure, this expression is well suited for parallel computations. Note that every row \mathbf{c}_i is the gradient of the corresponding residual r_i; with this in mind, the formula above emphasizes the fact that residuals contribute to the problem independently of each other.
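The row-wise accumulation can be written directly. A Python/NumPy sketch, with an assumed small dense example used only to check the result against the explicit product:

    import numpy as np

    def jtjp(rows, p):
        """Compute Jr^T Jr p as sum_i c_i (c_i . p) without forming Jr^T Jr."""
        out = np.zeros_like(p, dtype=float)
        for c in rows:                 # each c is the gradient of one residual r_i
            out += c * (c @ p)
        return out

    # Assumed small dense check: the two results agree.
    J = np.array([[1.0, 0.0, 2.0], [0.0, 3.0, 1.0]])
    p = np.array([1.0, 1.0, 1.0])
    print(jtjp(J, p), (J.T @ J) @ p)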


Related algorithms

In a quasi-Newton method, such as that due to Davidon, Fletcher and Powell or the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, an estimate of the full Hessian \frac{\partial^2 S}{\partial \beta_j \, \partial \beta_k} is built up numerically using first derivatives \frac{\partial r_i}{\partial \beta_j} only, so that after n refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to nonlinear least-squares problems.

Another method for solving minimization problems using only first derivatives is gradient descent. However, this method does not take into account the second derivatives, even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.
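For comparison, the sketch below (using SciPy, with an assumed toy exponential model and assumed data) minimizes the same sum of squares once with a general-purpose quasi-Newton method (BFGS) applied to S directly, and once with a dedicated least-squares solver; note that SciPy's least_squares uses a trust-region variant rather than the plain Gauss–Newton iteration.

    import numpy as np
    from scipy.optimize import minimize, least_squares

    t = np.array([0.0, 1.0, 2.0])
    y = np.array([1.0, 2.6, 7.5])                     # assumed toy data

    def residuals(beta):                              # model y = beta0 * exp(beta1 * t)
        return y - beta[0] * np.exp(beta[1] * t)

    S = lambda beta: np.sum(residuals(beta) ** 2)

    print(minimize(S, x0=[1.0, 0.5], method="BFGS").x)    # quasi-Newton applied to S itself
    print(least_squares(residuals, x0=[1.0, 0.5]).x)      # dedicated least-squares solver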


References

* Björck, Å. (1996). ''Numerical Methods for Least Squares Problems''. SIAM, Philadelphia.


External links

* ''Probability, Statistics and Estimation'': the algorithm is detailed and applied to the biology experiment discussed as an example in this article (page 84, with the uncertainties on the estimated values).


Implementations


* Artelys Knitro is a non-linear solver with an implementation of the Gauss–Newton method. It is written in C and has interfaces to C++/C#/Java/Python/MATLAB/R.