Non-linear least squares is the form of least squares analysis used to fit a set of ''m'' observations with a model that is non-linear in ''n'' unknown parameters (''m'' ≥ ''n''). It is used in some forms of nonlinear regression. The basis of the method is to approximate the model by a linear one and to refine the parameters by successive iterations. There are many similarities to linear least squares, but also some significant differences. In economic theory, the non-linear least squares method is applied in (i) probit regression, (ii) threshold regression, (iii) smooth regression, (iv) logistic link regression, and (v) Box–Cox transformed regressors.
Theory
Consider a set of $m$ data points, $(x_1, y_1), (x_2, y_2), \dots, (x_m, y_m)$, and a curve (model function) $\hat{y} = f(x, \boldsymbol\beta)$ that in addition to the variable $x$ also depends on $n$ parameters, $\boldsymbol\beta = (\beta_1, \beta_2, \dots, \beta_n)$, with $m \ge n$. It is desired to find the vector $\boldsymbol\beta$ of parameters such that the curve best fits the given data in the least squares sense, that is, the sum of squares

$$S = \sum_{i=1}^{m} r_i^2$$

is minimized, where the residuals (in-sample prediction errors) $r_i$ are given by

$$r_i = y_i - f(x_i, \boldsymbol\beta)$$

for $i = 1, 2, \dots, m$.

The minimum value of $S$ occurs when the gradient is zero. Since the model contains $n$ parameters there are $n$ gradient equations:

$$\frac{\partial S}{\partial \beta_j} = 2 \sum_{i} r_i \frac{\partial r_i}{\partial \beta_j} = 0 \quad (j = 1, \dots, n).$$
In a nonlinear system, the derivatives $\frac{\partial r_i}{\partial \beta_j}$ are functions of both the independent variable and the parameters, so in general these gradient equations do not have a closed solution. Instead, initial values must be chosen for the parameters. Then, the parameters are refined iteratively, that is, the values are obtained by successive approximation,

$$\beta_j \approx \beta_j^{k+1} = \beta_j^{k} + \Delta\beta_j.$$

Here, $k$ is an iteration number and the vector of increments, $\Delta\boldsymbol\beta$, is known as the shift vector. At each iteration the model is linearized by approximation to a first-order Taylor polynomial expansion about $\boldsymbol\beta^{k}$:

$$f(x_i, \boldsymbol\beta) \approx f(x_i, \boldsymbol\beta^{k}) + \sum_{j} \frac{\partial f(x_i, \boldsymbol\beta^{k})}{\partial \beta_j} \left(\beta_j - \beta_j^{k}\right) = f(x_i, \boldsymbol\beta^{k}) + \sum_{j} J_{ij}\, \Delta\beta_j.$$

The Jacobian matrix, $\mathbf{J}$, is a function of constants, the independent variable ''and'' the parameters, so it changes from one iteration to the next. Thus, in terms of the linearized model,

$$\frac{\partial r_i}{\partial \beta_j} = -J_{ij},$$

and the residuals are given by

$$r_i = \left(y_i - f(x_i, \boldsymbol\beta^{k})\right) - \sum_{s=1}^{n} J_{is}\, \Delta\beta_s = \Delta y_i - \sum_{s=1}^{n} J_{is}\, \Delta\beta_s.$$

Substituting these expressions into the gradient equations, they become

$$-2 \sum_{i=1}^{m} J_{ij} \left(\Delta y_i - \sum_{s=1}^{n} J_{is}\, \Delta\beta_s\right) = 0,$$

which, on rearrangement, become $n$ simultaneous linear equations, the normal equations

$$\sum_{i=1}^{m} \sum_{s=1}^{n} J_{ij} J_{is}\, \Delta\beta_s = \sum_{i=1}^{m} J_{ij}\, \Delta y_i \quad (j = 1, \dots, n).$$

The normal equations are written in matrix notation as

$$\left(\mathbf{J}^{\mathsf{T}} \mathbf{J}\right) \Delta\boldsymbol\beta = \mathbf{J}^{\mathsf{T}} \Delta\mathbf{y}.$$

These equations form the basis for the Gauss–Newton algorithm for a non-linear least squares problem.
Note the sign convention in the definition of the Jacobian matrix in terms of the derivatives. Formulas linear in $\mathbf{J}$ may appear with a factor of $-1$ in other articles or the literature.
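To make the iteration concrete, the following is a minimal Python sketch of the Gauss–Newton update described above, assuming the user supplies the model function and its analytic Jacobian; the function and variable names are illustrative, not part of any standard library.

    import numpy as np

    def gauss_newton(f, jac, x, y, beta0, max_iter=50, tol=1e-4):
        """Gauss-Newton refinement: f(x, beta) returns model values, jac(x, beta)
        returns the m-by-n Jacobian J[i, j] = d f(x_i, beta) / d beta_j."""
        beta = np.asarray(beta0, dtype=float)
        S_old = np.sum((y - f(x, beta)) ** 2)
        for _ in range(max_iter):
            dy = y - f(x, beta)                      # residuals Delta y_i
            J = jac(x, beta)                         # Jacobian at the current estimate
            # Solve the normal equations (J^T J) dbeta = J^T dy for the shift vector
            dbeta = np.linalg.solve(J.T @ J, J.T @ dy)
            beta = beta + dbeta
            S_new = np.sum((y - f(x, beta)) ** 2)
            # Stop when the relative decrease in the sum of squares is small
            if abs(S_old - S_new) < tol * S_old:
                break
            S_old = S_new
        return beta

    # Illustrative use: fit y = a * exp(b * x), with a and b as the two parameters
    f = lambda x, b: b[0] * np.exp(b[1] * x)
    jac = lambda x, b: np.column_stack([np.exp(b[1] * x), b[0] * x * np.exp(b[1] * x)])
    x = np.linspace(0.0, 2.0, 25)
    y = f(x, [2.0, -1.5]) + 0.01 * np.random.default_rng(0).standard_normal(x.size)
    print(gauss_newton(f, jac, x, y, beta0=[1.0, -1.0]))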
Extension by weights
When the observations are not equally reliable, a weighted sum of squares may be minimized,

$$S = \sum_{i=1}^{m} W_{ii}\, r_i^2.$$

Each element of the diagonal weight matrix $\mathbf{W}$ should, ideally, be equal to the reciprocal of the error variance of the measurement.
The normal equations are then, more generally,

$$\left(\mathbf{J}^{\mathsf{T}} \mathbf{W} \mathbf{J}\right) \Delta\boldsymbol\beta = \mathbf{J}^{\mathsf{T}} \mathbf{W}\, \Delta\mathbf{y}.$$
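As a small illustration (the helper name and arguments are hypothetical), the weighted shift vector can be computed without forming the weight matrix explicitly, assuming the per-observation standard deviations are known:

    import numpy as np

    def weighted_shift(J, dy, sigma):
        """Solve the weighted normal equations (J^T W J) dbeta = J^T W dy with
        W = diag(1 / sigma_i^2), i.e. weights equal to the reciprocal of the
        error variance of each observation; W is never formed explicitly."""
        w = 1.0 / np.asarray(sigma, dtype=float) ** 2   # diagonal of W
        JTW = J.T * w                                    # columns of J^T scaled by the weights
        return np.linalg.solve(JTW @ J, JTW @ dy)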
Geometrical interpretation
In linear least squares the objective function, $S$, is a quadratic function of the parameters. When there is only one parameter the graph of $S$ with respect to that parameter will be a parabola. With two or more parameters the contours of $S$ with respect to any pair of parameters will be concentric ellipses (assuming that the normal equations matrix $\mathbf{J}^{\mathsf{T}} \mathbf{W} \mathbf{J}$ is positive definite). The minimum parameter values are to be found at the centre of the ellipses. The geometry of the general objective function can be described as an elliptical paraboloid.
In NLLSQ the objective function is quadratic with respect to the parameters only in a region close to its minimum value, where the truncated Taylor series is a good approximation to the model.
The more the parameter values differ from their optimal values, the more the contours deviate from elliptical shape. A consequence of this is that initial parameter estimates should be as close as practicable to their (unknown!) optimal values. It also explains how divergence can come about, as the Gauss–Newton algorithm is convergent only when the objective function is approximately quadratic in the parameters.
Computation
Initial parameter estimates
Some problems of ill-conditioning and divergence can be corrected by finding initial parameter estimates that are near to the optimal values. A good way to do this is by computer simulation. Both the observed and calculated data are displayed on a screen. The parameters of the model are adjusted by hand until the agreement between observed and calculated data is reasonably good. Although this will be a subjective judgment, it is sufficient to find a good starting point for the non-linear refinement. Initial parameter estimates can also be created using transformations or linearizations. Better still, evolutionary algorithms such as the Stochastic Funnel Algorithm can lead to the convex basin of attraction that surrounds the optimal parameter estimates. Hybrid algorithms that use randomization and elitism, followed by Newton methods, have been shown to be useful and computationally efficient.
Solution
Any method among the ones described
below can be applied to find a solution.
Convergence criteria
The common sense criterion for convergence is that the sum of squares does not increase from one iteration to the next. However, this criterion is often difficult to implement in practice, for various reasons. A useful convergence criterion is

$$\left|\frac{S^{k} - S^{k+1}}{S^{k}}\right| < 0.0001.$$

The value 0.0001 is somewhat arbitrary and may need to be changed. In particular, it may need to be increased when experimental errors are large. An alternative criterion is

$$\left|\frac{\Delta\beta_j}{\beta_j}\right| < 0.001 \quad (j = 1, \dots, n).$$

Again, the numerical value is somewhat arbitrary; 0.001 is equivalent to specifying that each parameter should be refined to 0.1% precision. This is reasonable when it is less than the largest relative standard deviation on the parameters.
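Both criteria translate directly into code; the helper below is an illustrative sketch, assuming the sums of squares and parameter vectors from the current and previous iterations are at hand:

    import numpy as np

    def converged(S_prev, S_curr, beta_prev, dbeta, tol_S=0.0001, tol_beta=0.001):
        """Stop when the relative decrease of the sum of squares is below tol_S,
        or when every relative parameter shift |dbeta_j / beta_j| is below tol_beta."""
        small_S = abs(S_prev - S_curr) < tol_S * abs(S_prev)
        small_shift = np.all(np.abs(dbeta) < tol_beta * np.abs(beta_prev))
        return small_S or bool(small_shift)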
Calculation of the Jacobian by numerical approximation
There are models for which it is either very difficult or even impossible to derive analytical expressions for the elements of the Jacobian. Then, the numerical approximation

$$\frac{\partial f(x_i, \boldsymbol\beta)}{\partial \beta_j} \approx \frac{\delta f(x_i, \boldsymbol\beta)}{\delta \beta_j}$$

is obtained by calculation of $f(x_i, \boldsymbol\beta)$ for $\beta_j$ and $\beta_j + \delta\beta_j$. The increment size $\delta\beta_j$ should be chosen so the numerical derivative is not subject to approximation error by being too large, or round-off error by being too small.
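A forward-difference sketch of this approximation is given below; the relative step scaling is one common heuristic rather than something prescribed here:

    import numpy as np

    def numerical_jacobian(f, x, beta, delta=1e-6):
        """Forward-difference approximation of J[i, j] = d f(x_i, beta) / d beta_j.
        delta trades truncation error (too large) against round-off error (too small)."""
        beta = np.asarray(beta, dtype=float)
        f0 = f(x, beta)
        J = np.empty((f0.size, beta.size))
        for j in range(beta.size):
            step = delta * max(abs(beta[j]), 1.0)   # scale the increment to the parameter
            bp = beta.copy()
            bp[j] += step
            J[:, j] = (f(x, bp) - f0) / step
        return J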
Parameter errors, confidence limits, residuals etc.
Some information is given in
the corresponding section on the
Weighted least squares
page.
Multiple minima
Multiple minima can occur in a variety of circumstances, some of which are:
* A parameter is raised to a power of two or more. For example, when fitting data to a Lorentzian curve $f(x_i, \boldsymbol\beta) = \alpha \big/ \left[1 + \left((\gamma - x_i)/\beta\right)^2\right]$, where $\alpha$ is the height, $\gamma$ is the position and $\beta$ is the half-width at half height, there are two solutions for the half-width, $\hat\beta$ and $-\hat\beta$, which give the same optimal value for the objective function.
* Two parameters can be interchanged without changing the value of the model. A simple example is when the model contains the product of two parameters, since $\alpha\beta$ will give the same value as $\beta\alpha$.
* A parameter is in a trigonometric function, such as $\sin \beta x$, which has identical values at $\hat\beta + 2n\pi$. See the Levenberg–Marquardt algorithm for an example.
Not all multiple minima have equal values of the objective function. False minima, also known as local minima, occur when the objective function value is greater than its value at the so-called global minimum. To be certain that the minimum found is the global minimum, the refinement should be started with widely differing initial values of the parameters. When the same minimum is found regardless of starting point, it is likely to be the global minimum.
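A minimal multi-start sketch of this strategy follows; the fit callable is assumed to wrap a local refinement (such as the Gauss–Newton sketch above) and to return the refined parameters together with the final sum of squares:

    import numpy as np

    def multistart(fit, starts):
        """Run a local refinement from each starting vector in starts and keep
        the result with the lowest sum of squares; if all starts agree, the
        minimum found is likely (though not guaranteed) to be global."""
        results = [fit(np.asarray(b0, dtype=float)) for b0 in starts]  # each result: (beta_hat, S)
        return min(results, key=lambda r: r[1])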
When multiple minima exist there is an important consequence: the objective function will have a stationary point (e.g. a maximum or a saddle point) somewhere between two minima. The normal equations matrix is not positive definite at a stationary point in the objective function, because the gradient vanishes and no unique direction of descent exists. Refinement from a point (a set of parameter values) close to a stationary point will be ill-conditioned and should be avoided as a starting point. For example, when fitting a Lorentzian the normal equations matrix is not positive definite when the half-width of the Lorentzian is zero.
Transformation to a linear model
A non-linear model can sometimes be transformed into a linear one. Such an approximation is, for instance, often applicable in the vicinity of the best estimator, and it is one of the basic assumptions in most iterative minimization algorithms.
When a linear approximation is valid, the model can directly be used for inference with generalized least squares, where the equations of the Linear Template Fit apply.
Another example of a linear approximation would be when the model is a simple exponential function,

$$f(x_i, \boldsymbol\beta) = \alpha e^{\beta x_i},$$

which can be transformed into a linear model by taking logarithms,

$$\log f(x_i, \boldsymbol\beta) = \log \alpha + \beta x_i.$$

Graphically this corresponds to working on a semi-log plot. The sum of squares becomes

$$S = \sum_{i} \left(\log y_i - \log \alpha - \beta x_i\right)^2.$$

This procedure should be avoided unless the errors are multiplicative and log-normally distributed, because it can give misleading results. This comes from the fact that whatever the experimental errors on $y$ might be, the errors on $\log y$ are different. Therefore, when the transformed sum of squares is minimized, different results will be obtained both for the parameter values and their calculated standard deviations. However, with multiplicative errors that are log-normally distributed, this procedure gives unbiased and consistent parameter estimates.
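An illustrative fit of this kind on synthetic data with multiplicative, log-normal noise (the case in which the transformation is appropriate); the numerical values are made up for the example:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 4.0, 30)
    # Synthetic data y = alpha * exp(beta * x) with multiplicative, log-normal noise
    y = 2.5 * np.exp(-0.8 * x) * np.exp(0.05 * rng.standard_normal(x.size))

    # Ordinary linear least squares on (x, log y): log y = log(alpha) + beta * x
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    alpha_hat, beta_hat = np.exp(coef[0]), coef[1]
    print(alpha_hat, beta_hat)   # close to 2.5 and -0.8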
Another example is furnished by Michaelis–Menten kinetics, used to determine two parameters $V_{\max}$ and $K_m$:

$$v = \frac{V_{\max} [S]}{K_m + [S]}.$$

The Lineweaver–Burk plot of $\tfrac{1}{v}$ against $\tfrac{1}{[S]}$ is linear in the parameters $\tfrac{1}{V_{\max}}$ and $\tfrac{K_m}{V_{\max}}$, but very sensitive to data error and strongly biased toward fitting the data in a particular range of the independent variable $[S]$.