Numerical methods for linear least squares entail the numerical analysis of linear least squares problems.
Introduction
A general approach to the least squares problem \min_{\boldsymbol\beta} \lVert \mathbf y - X\boldsymbol\beta \rVert^2 can be described as follows. Suppose that we can find an ''n'' by ''m'' matrix ''S'' such that ''XS'' is an orthogonal projection onto the image of ''X''. Then a solution to our minimization problem is given by
: \boldsymbol\beta = S \mathbf y,
simply because
: X \boldsymbol\beta = X (S \mathbf y)
is exactly the sought-for orthogonal projection of \mathbf y onto the image of ''X'' (note that, as explained in the next section, the image of ''X'' is just the subspace generated by the column vectors of ''X'').
A few popular ways to find such a matrix ''S'' are described below.
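A minimal numerical sketch of this idea, assuming NumPy and using the Moore–Penrose pseudoinverse (introduced in the next section) as one possible choice of ''S'':
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))   # m = 6 observations, n = 3 parameters
y = rng.standard_normal(6)

S = np.linalg.pinv(X)             # one valid choice of S (the Moore-Penrose pseudoinverse)
P = X @ S                         # XS: orthogonal projection onto the image (column space) of X

# P is idempotent and symmetric, the defining properties of an orthogonal projection
assert np.allclose(P @ P, P)
assert np.allclose(P, P.T)

beta = S @ y                                      # candidate solution beta = S y
beta_ref = np.linalg.lstsq(X, y, rcond=None)[0]   # reference least squares solution
assert np.allclose(beta, beta_ref)
</syntaxhighlight>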
Inverting the matrix of the normal equations
The equation
: X^{\mathsf T} X \hat{\boldsymbol\beta} = X^{\mathsf T} \mathbf y
is known as the normal equation.
The algebraic solution of the normal equations with a full-rank matrix ''X''^T''X'' can be written as
: \hat{\boldsymbol\beta} = \left(X^{\mathsf T} X\right)^{-1} X^{\mathsf T} \mathbf y = X^{+} \mathbf y,
where ''X''^+ is the Moore–Penrose pseudoinverse of ''X''. Although this equation is correct and can work in many applications, it is not computationally efficient to invert the normal-equations matrix (the Gramian matrix). An exception occurs in numerical smoothing and differentiation where an analytical expression is required.
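A minimal sketch of this direct route, assuming NumPy; it is shown only for illustration, since the decomposition methods below are preferred numerically:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 4))
y = rng.standard_normal(100)

# Normal-equations solution: solve (X^T X) beta = X^T y
# (np.linalg.solve avoids forming the explicit inverse of the Gramian matrix)
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Equivalent expression through the Moore-Penrose pseudoinverse: beta = X^+ y
beta_pinv = np.linalg.pinv(X) @ y

assert np.allclose(beta_normal, beta_pinv)
</syntaxhighlight>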
If the matrix ''X''^T''X'' is well-conditioned and positive definite, implying that it has full rank, the normal equations can be solved directly by using the Cholesky decomposition ''X''^T''X'' = ''R''^T''R'', where ''R'' is an upper triangular matrix, giving:
: R^{\mathsf T} R \hat{\boldsymbol\beta} = X^{\mathsf T} \mathbf y.
The solution is obtained in two stages, a forward substitution step, solving for \mathbf z:
: R^{\mathsf T} \mathbf z = X^{\mathsf T} \mathbf y,
followed by a backward substitution, solving for \hat{\boldsymbol\beta}:
: R \hat{\boldsymbol\beta} = \mathbf z.
Both substitutions are facilitated by the triangular nature of ''R''.
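A minimal sketch of the Cholesky route, assuming NumPy and SciPy are available (SciPy's triangular solver exploits the structure of ''R''):
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3))
y = rng.standard_normal(50)

# Cholesky factorization of the Gramian: X^T X = R^T R, with R upper triangular
R = cholesky(X.T @ X, lower=False)

# Forward substitution: solve R^T z = X^T y for z
z = solve_triangular(R.T, X.T @ y, lower=True)
# Backward substitution: solve R beta = z for beta
beta = solve_triangular(R, z, lower=False)

assert np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0])
</syntaxhighlight>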
Orthogonal decomposition methods
Orthogonal decomposition methods of solving the least squares problem are slower than the normal equations method but are more numerically stable because they avoid forming the product ''X''^T''X''.
The residuals are written in matrix notation as
: \mathbf r = \mathbf y - X \hat{\boldsymbol\beta}.
The matrix ''X'' is subjected to an orthogonal decomposition, e.g., the QR decomposition as follows:
: X = Q \begin{pmatrix} R \\ 0 \end{pmatrix},
where ''Q'' is an ''m''×''m'' orthogonal matrix (''Q''^T''Q'' = ''I'') and ''R'' is an ''n''×''n'' upper triangular matrix with ''r''_''ii'' > 0.
The residual vector is left-multiplied by ''Q''^T:
: Q^{\mathsf T} \mathbf r = Q^{\mathsf T} \mathbf y - \left(Q^{\mathsf T} X\right) \hat{\boldsymbol\beta} = \begin{pmatrix} \left(Q^{\mathsf T} \mathbf y\right)_n - R \hat{\boldsymbol\beta} \\ \left(Q^{\mathsf T} \mathbf y\right)_{m-n} \end{pmatrix} = \begin{pmatrix} \mathbf u \\ \mathbf v \end{pmatrix}
Because ''Q'' is orthogonal, the sum of squares of the residuals, ''s'', may be written as:
: s = \lVert \mathbf r \rVert^2 = \mathbf u^{\mathsf T} \mathbf u + \mathbf v^{\mathsf T} \mathbf v.
Since \mathbf v does not depend on ''β'', the minimum value of ''s'' is attained when the upper block, \mathbf u, is zero. Therefore, the parameters are found by solving:
: R \hat{\boldsymbol\beta} = \left(Q^{\mathsf T} \mathbf y\right)_n.
These equations are easily solved as ''R'' is upper triangular.
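A minimal sketch of the QR route, assuming NumPy; the reduced ("economic") factorization returned by default gives the first ''n'' columns of ''Q'' and the ''n''×''n'' triangular factor directly:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 3))
y = rng.standard_normal(50)

# Reduced QR factorization: X = Q1 R, where Q1 (m x n) has orthonormal columns
Q1, R = np.linalg.qr(X)

# Setting the upper block u to zero amounts to solving R beta = Q1^T y;
# since R is upper triangular, back substitution suffices
beta = np.linalg.solve(R, Q1.T @ y)

assert np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0])
</syntaxhighlight>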
An alternative decomposition of ''X'' is the singular value decomposition (SVD):
: X = U \Sigma V^{\mathsf T},
where ''U'' is an ''m'' by ''m'' orthogonal matrix, ''V'' is an ''n'' by ''n'' orthogonal matrix and \Sigma is an ''m'' by ''n'' matrix with all its elements outside of the main diagonal equal to 0. The pseudoinverse of \Sigma is easily obtained by inverting its non-zero diagonal elements and transposing. Hence,
: X X^{+} = U \Sigma V^{\mathsf T} V \Sigma^{+} U^{\mathsf T} = U P U^{\mathsf T},
where ''P'' is obtained from \Sigma by replacing its non-zero diagonal elements with ones. Since X X^{+} X X^{+} = X X^{+} and \left(X X^{+}\right)^{\mathsf T} = X X^{+} (properties of the pseudoinverse), the matrix U P U^{\mathsf T} is an orthogonal projection onto the image (column space) of ''X''. In accordance with the general approach described in the introduction above (find ''XS'' which is an orthogonal projection),
: S = X^{+},
and thus,
: \boldsymbol\beta = X^{+} \mathbf y = V \Sigma^{+} U^{\mathsf T} \mathbf y
is a solution of a least squares problem. This method is the most computationally intensive, but is particularly useful if the normal-equations matrix, ''X''^T''X'', is very ill-conditioned (i.e. if its condition number multiplied by the machine's relative round-off error is appreciably large). In that case, including the smallest singular values in the inversion merely adds numerical noise to the solution. This can be cured with the truncated SVD approach, giving a more stable and exact answer, by explicitly setting to zero all singular values below a certain threshold and so ignoring them, a process closely related to factor analysis.
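A minimal sketch of the SVD route, assuming NumPy; the truncation threshold below is an arbitrary illustrative choice, not a prescribed value:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 3))
y = rng.standard_normal(50)

# Thin SVD: X = U * diag(s) * Vt, with s the vector of singular values
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Pseudoinverse solution: beta = V Sigma^+ U^T y
beta = Vt.T @ ((U.T @ y) / s)
assert np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0])

# Truncated SVD: discard singular values below a relative threshold,
# which suppresses numerical noise when X is ill-conditioned
tol = 1e-10 * s.max()
keep = s > tol
beta_trunc = Vt[keep, :].T @ ((U[:, keep].T @ y) / s[keep])
</syntaxhighlight>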
Discussion
The numerical methods for linear least squares are important because linear regression models are among the most important types of model, both as formal statistical models and for exploration of data-sets. The majority of statistical computer packages contain facilities for regression analysis that make use of linear least squares computations. Hence it is appropriate that considerable effort has been devoted to the task of ensuring that these computations are undertaken efficiently and with due regard to round-off error.
Individual statistical analyses are seldom undertaken in isolation, but rather are part of a sequence of investigatory steps. Some of the topics involved in considering numerical methods for linear least squares relate to this point. Thus important topics can be:
*Computations where a number of similar, and often nested, models are considered for the same data-set. That is, where models with the same dependent variable but different sets of independent variables are to be considered, for essentially the same set of data-points.
*Computations for analyses that occur in a sequence, as the number of data-points increases.
*Special considerations for very extensive data-sets.
Fitting of linear models by least squares often, but not always, arises in the context of statistical analysis. It can therefore be important that considerations of computational efficiency for such problems extend to all of the auxiliary quantities required for such analyses, and are not restricted to the formal solution of the linear least squares problem.
Matrix calculations, like any other, are affected by rounding errors. An early summary of these effects, regarding the choice of computation methods for matrix inversion, was provided by Wilkinson.
[Wilkinson, J.H. (1963) "Chapter 3: Matrix Computations", ''Rounding Errors in Algebraic Processes'', London: Her Majesty's Stationery Office (National Physical Laboratory, Notes in Applied Science, No.32)]
See also
* Numerical linear algebra
* Numerical methods for non-linear least squares
References
Further reading
*Åke Björck (1996), ''Numerical Methods for Least Squares Problems'', SIAM.
*R. W. Farebrother, ''Linear Least Squares Computations'', CRC Press, 1988.
*National Physical Laboratory (1961), "Chapter 2: Linear Equations and Matrices: Direct Methods on Automatic Computers", ''Modern Computing Methods'', Notes on Applied Science No. 16 (2nd ed.), Her Majesty's Stationery Office.