The purpose of this page is to provide supplementary materials for the ordinary least squares article, reducing the load of the main article with mathematics and improving its accessibility, while at the same time retaining the completeness of exposition.
Derivation of the normal equations
Define the ''i''th residual to be
:<math>r_i = y_i - \sum_{j=1}^m X_{ij}\beta_j.</math>
Then the objective <math>S</math> can be rewritten
:<math>S = \sum_{i=1}^n r_i^2.</math>
Given that ''S'' is convex, it is minimized when its gradient vector is zero. (This follows by definition: if the gradient vector is not zero, there is a direction in which we can move to decrease it further – see maxima and minima.) The elements of the gradient vector are the partial derivatives of ''S'' with respect to the parameters:
:<math>\frac{\partial S}{\partial \beta_j} = 2\sum_{i=1}^n r_i \frac{\partial r_i}{\partial \beta_j} \qquad (j = 1, 2, \dots, m).</math>
The derivatives are
:<math>\frac{\partial r_i}{\partial \beta_j} = -X_{ij}.</math>
Substitution of the expressions for the residuals and the derivatives into the gradient equations gives
:<math>\frac{\partial S}{\partial \beta_j} = 2\sum_{i=1}^n \Bigl(y_i - \sum_{k=1}^m X_{ik}\beta_k\Bigr)(-X_{ij}) \qquad (j = 1, 2, \dots, m).</math>
Thus if <math>\hat\beta</math> minimizes ''S'', we have
:<math>2\sum_{i=1}^n \Bigl(y_i - \sum_{k=1}^m X_{ik}\hat\beta_k\Bigr)(-X_{ij}) = 0 \qquad (j = 1, 2, \dots, m).</math>
Upon rearrangement, we obtain the normal equations:
:<math>\sum_{i=1}^n \sum_{k=1}^m X_{ij} X_{ik} \hat\beta_k = \sum_{i=1}^n X_{ij} y_i \qquad (j = 1, 2, \dots, m).</math>
The normal equations are written in matrix notation as
:<math>(X^{\mathrm T} X)\hat\beta = X^{\mathrm T} y</math>
(where ''X''<sup>T</sup> is the matrix transpose of ''X'').
The solution of the normal equations yields the vector <math>\hat\beta</math> of the optimal parameter values.
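The normal equations can also be checked numerically. The following minimal sketch (an illustration only, assuming NumPy and synthetic data) solves <math>(X^{\mathrm T} X)\hat\beta = X^{\mathrm T} y</math> and compares the result with a library least-squares routine:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 3                      # observations, parameters
X = rng.normal(size=(n, m))        # design matrix
beta_true = np.array([1.5, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Solve the normal equations (X^T X) beta_hat = X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Compare with NumPy's least-squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))   # True
</syntaxhighlight>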
Derivation directly in terms of matrices
The normal equations can be derived directly from a matrix representation of the problem as follows. The objective is to minimize
:<math>S(\beta) = \bigl\|y - X\beta\bigr\|^2 = (y - X\beta)^{\mathrm T}(y - X\beta) = y^{\mathrm T} y - \beta^{\mathrm T} X^{\mathrm T} y - y^{\mathrm T} X\beta + \beta^{\mathrm T} X^{\mathrm T} X\beta.</math>
Here <math>\beta^{\mathrm T} X^{\mathrm T} y</math> has the dimension 1×1 (the number of columns of <math>y</math>), so it is a scalar and equal to its own transpose, hence <math>\beta^{\mathrm T} X^{\mathrm T} y = y^{\mathrm T} X\beta</math> and the quantity to minimize becomes
:<math>S(\beta) = y^{\mathrm T} y - 2\beta^{\mathrm T} X^{\mathrm T} y + \beta^{\mathrm T} X^{\mathrm T} X\beta.</math>
Differentiating this with respect to <math>\beta</math> and equating to zero to satisfy the first-order conditions gives
:<math>-X^{\mathrm T} y + (X^{\mathrm T} X)\beta = 0,</math>
which is equivalent to the above-given normal equations. A sufficient condition for satisfaction of the second-order conditions for a minimum is that <math>X</math> have full column rank, in which case <math>X^{\mathrm T} X</math> is positive definite.
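As a numerical illustration of the second-order condition (a sketch, assuming NumPy), one can check that <math>X^{\mathrm T} X</math> has strictly positive eigenvalues whenever <math>X</math> has full column rank:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))          # generic X has full column rank with probability 1

gram = X.T @ X
eigvals = np.linalg.eigvalsh(gram)    # eigenvalues of the symmetric matrix X^T X
print(np.linalg.matrix_rank(X) == X.shape[1])  # full column rank
print(np.all(eigvals > 0))                     # hence X^T X is positive definite
</syntaxhighlight>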
Derivation without calculus
When <math>X^{\mathrm T} X</math> is positive definite, the formula for the minimizing value of <math>\beta</math> can be derived without the use of derivatives. The quantity
:<math>S(\beta) = y^{\mathrm T} y - 2\beta^{\mathrm T} X^{\mathrm T} y + \beta^{\mathrm T} X^{\mathrm T} X\beta</math>
can be written as
:<math>\bigl\langle \beta, \beta \bigr\rangle - 2\bigl\langle \beta,\ (X^{\mathrm T} X)^{-1} X^{\mathrm T} y \bigr\rangle + \bigl\langle (X^{\mathrm T} X)^{-1} X^{\mathrm T} y,\ (X^{\mathrm T} X)^{-1} X^{\mathrm T} y \bigr\rangle + C,</math>
where <math>C</math> depends only on <math>y</math> and <math>X</math>, and <math>\langle \cdot, \cdot \rangle</math> is the inner product defined by
:<math>\langle u, v \rangle = u^{\mathrm T}(X^{\mathrm T} X)v.</math>
It follows that <math>S(\beta)</math> is equal to
:<math>\bigl\langle \beta - (X^{\mathrm T} X)^{-1} X^{\mathrm T} y,\ \beta - (X^{\mathrm T} X)^{-1} X^{\mathrm T} y \bigr\rangle + C</math>
and therefore minimized exactly when
:<math>\beta = (X^{\mathrm T} X)^{-1} X^{\mathrm T} y.</math>
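The decomposition above is easy to verify numerically. The sketch below (illustrative, assuming NumPy) checks that <math>S(\beta) - \langle \beta - \hat\beta,\ \beta - \hat\beta \rangle</math> is the same constant <math>C</math> for arbitrary <math>\beta</math>, and that this constant equals the minimum value <math>S(\hat\beta)</math>:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n, m = 60, 3
X = rng.normal(size=(n, m))
y = rng.normal(size=n)

A = X.T @ X
beta_hat = np.linalg.solve(A, X.T @ y)

def S(beta):
    r = y - X @ beta
    return r @ r

def C(beta):
    d = beta - beta_hat
    return S(beta) - d @ A @ d     # should not depend on beta

vals = [C(rng.normal(size=m)) for _ in range(5)]
print(np.allclose(vals, S(beta_hat)))   # True: C equals the minimum S(beta_hat)
</syntaxhighlight>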
Generalization for complex equations
In general, the coefficients of the matrices <math>X</math> and <math>y</math> can be complex. By using a Hermitian transpose instead of a simple transpose, it is possible to find a vector <math>\hat\beta</math> which minimizes <math>S(\beta)</math>, just as for the real matrix case. In order to get the normal equations we follow a similar path as in previous derivations:
:<math>S(\beta) = (y - X\beta)^\dagger (y - X\beta) = y^\dagger y - \beta^\dagger X^\dagger y - y^\dagger X\beta + \beta^\dagger X^\dagger X\beta,</math>
where <math>\dagger</math> stands for Hermitian transpose.
We should now take derivatives of <math>S(\beta)</math> with respect to each of the coefficients <math>\beta_j</math>, but first we separate real and imaginary parts to deal with the conjugate factors in the above expression. For the <math>\beta_j</math> we have
:<math>\beta_j = \beta_j^R + i\beta_j^I</math>
and the derivatives change into
:<math>\frac{\partial S}{\partial \beta_j^R}, \quad \frac{\partial S}{\partial \beta_j^I} \qquad (j = 1, 2, \dots, m).</math>
After rewriting <math>S(\beta)</math> in the summation form and writing <math>\beta_j</math> explicitly, we can calculate both partial derivatives, with the result:
:<math>\frac{\partial S}{\partial \beta_j^R} = -\sum_{i=1}^n \bigl(\overline X_{ij} y_i + \overline y_i X_{ij}\bigr) + \sum_{i=1}^n \sum_{k=1}^m \bigl(\overline X_{ij} X_{ik}\beta_k + \overline\beta_k \overline X_{ik} X_{ij}\bigr),</math>
:<math>\frac{\partial S}{\partial \beta_j^I} = i\sum_{i=1}^n \bigl(\overline X_{ij} y_i - \overline y_i X_{ij}\bigr) - i\sum_{i=1}^n \sum_{k=1}^m \bigl(\overline X_{ij} X_{ik}\beta_k - \overline\beta_k \overline X_{ik} X_{ij}\bigr),</math>
which, after adding them together (the first plus <math>i</math> times the second) and comparing to zero (the minimization condition for <math>\hat\beta</math>), yields
:<math>\sum_{i=1}^n \overline X_{ij} y_i = \sum_{i=1}^n \sum_{k=1}^m \overline X_{ij} X_{ik} \hat\beta_k \qquad (j = 1, 2, \dots, m).</math>
In matrix form:
:<math>X^\dagger X \hat\beta = X^\dagger y.</math>
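A numerical sketch of the complex case (illustrative, assuming NumPy): solving <math>X^\dagger X \hat\beta = X^\dagger y</math> with the conjugate transpose reproduces the complex least-squares solution:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n, m = 40, 2
X = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
y = rng.normal(size=n) + 1j * rng.normal(size=n)

Xh = X.conj().T                               # Hermitian (conjugate) transpose
beta_hat = np.linalg.solve(Xh @ X, Xh @ y)    # normal equations with X^dagger

beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))      # True
</syntaxhighlight>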
Least squares estimator for ''β''
Using matrix notation, the sum of squared residuals is given by
:<math>S(\beta) = (y - X\beta)^{\mathrm T}(y - X\beta).</math>
Since this is a quadratic expression, the vector which gives the global minimum may be found via matrix calculus by differentiating with respect to the vector <math>\beta</math> (using denominator layout) and setting equal to zero:
:<math>0 = \frac{dS}{d\beta}(\hat\beta) = \frac{d}{d\beta}\Bigl(y^{\mathrm T} y - \beta^{\mathrm T} X^{\mathrm T} y - y^{\mathrm T} X\beta + \beta^{\mathrm T} X^{\mathrm T} X\beta\Bigr)\Big|_{\beta = \hat\beta} = -2X^{\mathrm T} y + 2X^{\mathrm T} X\hat\beta.</math>
By assumption matrix ''X'' has full column rank, and therefore ''X''<sup>T</sup>''X'' is invertible and the least squares estimator for ''β'' is given by
:<math>\hat\beta = (X^{\mathrm T} X)^{-1} X^{\mathrm T} y.</math>
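The gradient expression used above, <math>-2X^{\mathrm T} y + 2X^{\mathrm T} X\beta</math> in denominator layout, can be checked against a finite-difference approximation; a minimal sketch assuming NumPy:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n, m = 30, 3
X = rng.normal(size=(n, m))
y = rng.normal(size=n)
beta = rng.normal(size=m)

def S(b):
    r = y - X @ b
    return r @ r

grad_analytic = -2 * X.T @ y + 2 * X.T @ X @ beta

eps = 1e-6
grad_numeric = np.array([
    (S(beta + eps * np.eye(m)[j]) - S(beta - eps * np.eye(m)[j])) / (2 * eps)
    for j in range(m)
])
print(np.allclose(grad_analytic, grad_numeric, atol=1e-4))   # True
</syntaxhighlight>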
Unbiasedness and variance of <math>\hat\beta</math>
Plug ''y'' = ''Xβ'' + ''ε'' into the formula for <math>\hat\beta</math> and then use the law of total expectation:
:<math>\begin{align}
\operatorname{E}[\hat\beta] &= \operatorname{E}\Bigl[(X^{\mathrm T} X)^{-1} X^{\mathrm T}(X\beta + \varepsilon)\Bigr] \\
&= \beta + \operatorname{E}\Bigl[(X^{\mathrm T} X)^{-1} X^{\mathrm T}\varepsilon\Bigr] \\
&= \beta + \operatorname{E}\Bigl[\operatorname{E}\bigl[(X^{\mathrm T} X)^{-1} X^{\mathrm T}\varepsilon \mid X\bigr]\Bigr] \\
&= \beta + \operatorname{E}\Bigl[(X^{\mathrm T} X)^{-1} X^{\mathrm T}\operatorname{E}[\varepsilon \mid X]\Bigr] \\
&= \beta,
\end{align}</math>
where E[''ε''&nbsp;|&nbsp;''X''] = 0 by assumptions of the model. Since the expected value of <math>\hat\beta</math> equals the parameter it estimates, <math>\beta</math>, it is an unbiased estimator of <math>\beta</math>.
For the variance, let the covariance matrix of <math>\varepsilon</math> be <math>\operatorname{E}[\varepsilon\varepsilon^{\mathrm T}] = \sigma^2 I</math> (where <math>I</math> is the identity matrix), and let ''X'' be a known constant matrix. Then,
:<math>\operatorname{E}\bigl[(\hat\beta - \beta)(\hat\beta - \beta)^{\mathrm T}\bigr] = \operatorname{E}\Bigl[\bigl((X^{\mathrm T} X)^{-1} X^{\mathrm T}\varepsilon\bigr)\bigl((X^{\mathrm T} X)^{-1} X^{\mathrm T}\varepsilon\bigr)^{\mathrm T}\Bigr] = (X^{\mathrm T} X)^{-1} X^{\mathrm T}\operatorname{E}[\varepsilon\varepsilon^{\mathrm T}]\,X(X^{\mathrm T} X)^{-1} = \sigma^2 (X^{\mathrm T} X)^{-1},</math>
where we used the fact that <math>\hat\beta - \beta</math> is just an affine transformation of <math>\varepsilon</math> by the matrix <math>(X^{\mathrm T} X)^{-1} X^{\mathrm T}</math>.
For a simple linear regression model, where <math>\beta = [\beta_0, \beta_1]^{\mathrm T}</math> (<math>\beta_0</math> is the ''y''-intercept and <math>\beta_1</math> is the slope), one obtains
:<math>\sigma^2 (X^{\mathrm T} X)^{-1} = \sigma^2 \begin{pmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{pmatrix}^{-1} = \frac{\sigma^2}{n\sum x_i^2 - \bigl(\sum x_i\bigr)^2} \begin{pmatrix} \sum x_i^2 & -\sum x_i \\ -\sum x_i & n \end{pmatrix},</math>
so that in particular
:<math>\operatorname{Var}(\hat\beta_1) = \frac{\sigma^2}{\sum_{i=1}^n (x_i - \bar x)^2}, \qquad \operatorname{Var}(\hat\beta_0) = \frac{\sigma^2 \sum_{i=1}^n x_i^2}{n\sum_{i=1}^n (x_i - \bar x)^2}.</math>
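A Monte Carlo sketch (illustrative, assuming NumPy) of unbiasedness and of the covariance formula <math>\sigma^2 (X^{\mathrm T} X)^{-1}</math>, holding <math>X</math> fixed across replications:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
n, m, sigma = 50, 2, 0.7
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept and one regressor
beta = np.array([1.0, 2.0])

reps = 20000
estimates = np.empty((reps, m))
for r in range(reps):
    y = X @ beta + sigma * rng.normal(size=n)
    estimates[r] = np.linalg.solve(X.T @ X, X.T @ y)

print(estimates.mean(axis=0))              # close to beta (unbiasedness)
print(np.cov(estimates.T))                 # close to sigma^2 (X^T X)^{-1}
print(sigma**2 * np.linalg.inv(X.T @ X))
</syntaxhighlight>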
Expected value and biasedness of <math>\widehat\sigma^{\,2}</math>
First we will plug in the expression for ''y'' into the estimator, and use the fact that ''X'M'' = ''MX'' = 0 (matrix ''M'' projects onto the space orthogonal to ''X''):
:<math>\widehat\sigma^{\,2} = \tfrac{1}{n}\bigl(y - X\hat\beta\bigr)^{\mathrm T}\bigl(y - X\hat\beta\bigr) = \tfrac{1}{n}(My)^{\mathrm T} My = \tfrac{1}{n}(X\beta + \varepsilon)^{\mathrm T} M(X\beta + \varepsilon) = \tfrac{1}{n}\varepsilon^{\mathrm T} M\varepsilon.</math>
Now we can recognize ''ε''′''Mε'' as a 1×1 matrix; such a matrix is equal to its own trace. This is useful because, by properties of the trace operator, tr(''AB'') = tr(''BA''), and we can use this to separate the disturbance ''ε'' from the matrix ''M'', which is a function of the regressors ''X'':
:<math>\operatorname{E}\bigl[\widehat\sigma^{\,2}\bigr] = \tfrac{1}{n}\operatorname{E}\bigl[\varepsilon^{\mathrm T} M\varepsilon\bigr] = \tfrac{1}{n}\operatorname{E}\bigl[\operatorname{tr}\bigl(\varepsilon^{\mathrm T} M\varepsilon\bigr)\bigr] = \tfrac{1}{n}\operatorname{E}\bigl[\operatorname{tr}\bigl(M\varepsilon\varepsilon^{\mathrm T}\bigr)\bigr].</math>
Using the law of iterated expectation this can be written as
:<math>\tfrac{1}{n}\operatorname{E}\bigl[\operatorname{tr}\bigl(M\varepsilon\varepsilon^{\mathrm T}\bigr)\bigr] = \tfrac{1}{n}\operatorname{E}\Bigl[\operatorname{tr}\bigl(M\,\operatorname{E}[\varepsilon\varepsilon^{\mathrm T} \mid X]\bigr)\Bigr] = \tfrac{1}{n}\operatorname{E}\bigl[\operatorname{tr}\bigl(M\,\sigma^2 I\bigr)\bigr] = \tfrac{\sigma^2}{n}\operatorname{E}\bigl[\operatorname{tr}(M)\bigr].</math>
Recall that ''M'' = ''I'' − ''P'' where ''P'' is the projection onto the linear space spanned by the columns of matrix ''X''. By properties of a projection matrix, it has ''p'' = rank(''X'') eigenvalues equal to 1, and all other eigenvalues are equal to 0. The trace of a matrix is equal to the sum of its characteristic values, thus tr(''P'') = ''p'', and tr(''M'') = ''n'' − ''p''. Therefore,
:<math>\operatorname{E}\bigl[\widehat\sigma^{\,2}\bigr] = \frac{n - p}{n}\,\sigma^2.</math>
Since the expected value of <math>\widehat\sigma^{\,2}</math> does not equal the parameter it estimates, <math>\sigma^2</math>, it is a biased estimator of <math>\sigma^2</math>. Note that in the later section “Maximum likelihood” we show that under the additional assumption that the errors are distributed normally, the estimator <math>\widehat\sigma^{\,2}</math> is proportional to a chi-squared distribution with ''n''&nbsp;−&nbsp;''p'' degrees of freedom, from which the formula for the expected value would immediately follow. However, the result we have shown in this section is valid regardless of the distribution of the errors, and thus has importance on its own.
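The bias result <math>\operatorname{E}[\widehat\sigma^{\,2}] = \tfrac{n-p}{n}\sigma^2</math> can be illustrated by simulation; a sketch assuming NumPy:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
n, p, sigma = 30, 4, 1.5
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)

reps = 20000
sigma2_hat = np.empty(reps)
for r in range(reps):
    y = X @ beta + sigma * rng.normal(size=n)
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta_hat
    sigma2_hat[r] = resid @ resid / n      # ML-type estimator, divides by n

print(sigma2_hat.mean())                   # close to (n - p) / n * sigma^2
print((n - p) / n * sigma**2)
</syntaxhighlight>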
Consistency and asymptotic normality of <math>\hat\beta</math>
Estimator <math>\hat\beta</math> can be written as
:<math>\hat\beta = \Bigl(\tfrac{1}{n} X^{\mathrm T} X\Bigr)^{-1} \tfrac{1}{n} X^{\mathrm T} y = \beta + \Bigl(\tfrac{1}{n} X^{\mathrm T} X\Bigr)^{-1} \tfrac{1}{n} X^{\mathrm T}\varepsilon = \beta + \Bigl(\tfrac{1}{n}\sum_{i=1}^n x_i x_i^{\mathrm T}\Bigr)^{-1}\Bigl(\tfrac{1}{n}\sum_{i=1}^n x_i \varepsilon_i\Bigr).</math>
We can use the law of large numbers to establish that
:<math>\tfrac{1}{n}\sum_{i=1}^n x_i x_i^{\mathrm T}\ \xrightarrow{p}\ \operatorname{E}\bigl[x_i x_i^{\mathrm T}\bigr] \equiv Q_{xx}, \qquad \tfrac{1}{n}\sum_{i=1}^n x_i \varepsilon_i\ \xrightarrow{p}\ \operatorname{E}[x_i \varepsilon_i] = 0.</math>
By Slutsky's theorem and the continuous mapping theorem these results can be combined to establish consistency of the estimator <math>\hat\beta</math>:
:<math>\hat\beta\ \xrightarrow{p}\ \beta + Q_{xx}^{-1}\cdot 0 = \beta.</math>
The central limit theorem tells us that
:<math>\tfrac{1}{\sqrt n} X^{\mathrm T}\varepsilon = \tfrac{1}{\sqrt n}\sum_{i=1}^n x_i \varepsilon_i\ \xrightarrow{d}\ \mathcal{N}\bigl(0,\ V\bigr),</math>
where <math>V = \operatorname{Var}[x_i \varepsilon_i] = \operatorname{E}\bigl[\varepsilon_i^2 x_i x_i^{\mathrm T}\bigr] = \sigma^2 Q_{xx}</math> under the homoscedasticity assumption <math>\operatorname{E}[\varepsilon_i^2 \mid x_i] = \sigma^2</math>.
Applying Slutsky's theorem again we'll have
:<math>\sqrt n\,\bigl(\hat\beta - \beta\bigr) = \Bigl(\tfrac{1}{n} X^{\mathrm T} X\Bigr)^{-1} \tfrac{1}{\sqrt n} X^{\mathrm T}\varepsilon\ \xrightarrow{d}\ Q_{xx}^{-1}\,\mathcal{N}\bigl(0,\ \sigma^2 Q_{xx}\bigr) = \mathcal{N}\bigl(0,\ \sigma^2 Q_{xx}^{-1}\bigr).</math>
Maximum likelihood approach
Maximum likelihood estimation is a generic technique for estimating the unknown parameters in a statistical model by constructing a log-likelihood function corresponding to the joint distribution of the data, then maximizing this function over all possible parameter values. In order to apply this method, we have to make an assumption about the distribution of y given X so that the log-likelihood function can be constructed. The connection of maximum likelihood estimation to OLS arises when this distribution is modeled as a
multivariate normal.
Specifically, assume that the errors ''ε'' have a multivariate normal distribution with mean 0 and variance matrix ''σ''<sup>2</sup>''I''. Then the distribution of ''y'' conditionally on ''X'' is
:<math>y \mid X\ \sim\ \mathcal{N}\bigl(X\beta,\ \sigma^2 I\bigr),</math>
and the log-likelihood function of the data will be
:<math>\mathcal{L}(\beta, \sigma^2 \mid X) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}(y - X\beta)^{\mathrm T}(y - X\beta).</math>
Differentiating this expression with respect to ''β'' and ''σ''<sup>2</sup> we'll find the ML estimates of these parameters:
:<math>\frac{\partial \mathcal{L}}{\partial \beta} = -\frac{1}{2\sigma^2}\bigl(-2X^{\mathrm T} y + 2X^{\mathrm T} X\beta\bigr) = 0 \quad\Rightarrow\quad \hat\beta_{\mathrm{ML}} = (X^{\mathrm T} X)^{-1} X^{\mathrm T} y,</math>
:<math>\frac{\partial \mathcal{L}}{\partial \sigma^2} = -\frac{n}{2}\,\frac{1}{\sigma^2} + \frac{1}{2\sigma^4}(y - X\beta)^{\mathrm T}(y - X\beta) = 0 \quad\Rightarrow\quad \widehat\sigma^{\,2}_{\mathrm{ML}} = \frac{1}{n}\bigl(y - X\hat\beta\bigr)^{\mathrm T}\bigl(y - X\hat\beta\bigr).</math>
We can check that this is indeed a maximum by looking at the
Hessian matrix of the log-likelihood function.
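A sketch (assuming NumPy) checking numerically that the closed-form ML estimates maximize the log-likelihood: perturbing either <math>\hat\beta</math> or <math>\widehat\sigma^{\,2}</math> only lowers it:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(8)
n, p = 80, 3
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.8 * rng.normal(size=n)

def loglik(beta, sigma2):
    r = y - X @ beta
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * n * np.log(sigma2) - r @ r / (2 * sigma2)

beta_ml = np.linalg.solve(X.T @ X, X.T @ y)
sigma2_ml = np.sum((y - X @ beta_ml) ** 2) / n

best = loglik(beta_ml, sigma2_ml)
perturbed = [
    loglik(beta_ml + 0.05 * rng.normal(size=p), sigma2_ml),
    loglik(beta_ml, 1.3 * sigma2_ml),
]
print(all(v < best for v in perturbed))   # True: the ML estimates attain the maximum
</syntaxhighlight>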
Finite-sample distribution
Since we have assumed in this section that the distribution of error terms is known to be normal, it becomes possible to derive the explicit expressions for the distributions of the estimators <math>\hat\beta</math> and <math>\widehat\sigma^{\,2}</math>:
:<math>\hat\beta = (X^{\mathrm T} X)^{-1} X^{\mathrm T} y = (X^{\mathrm T} X)^{-1} X^{\mathrm T}(X\beta + \varepsilon) = \beta + (X^{\mathrm T} X)^{-1} X^{\mathrm T}\varepsilon,</math>
so that by the affine transformation properties of the multivariate normal distribution
:<math>\hat\beta \mid X\ \sim\ \mathcal{N}\bigl(\beta,\ \sigma^2 (X^{\mathrm T} X)^{-1}\bigr).</math>
Similarly the distribution of <math>\widehat\sigma^{\,2}</math> follows from
:<math>\widehat\sigma^{\,2} = \tfrac{1}{n}\bigl(y - X\hat\beta\bigr)^{\mathrm T}\bigl(y - X\hat\beta\bigr) = \tfrac{1}{n}(My)^{\mathrm T} My = \tfrac{1}{n}\varepsilon^{\mathrm T} M\varepsilon,</math>
where <math>M = I - X(X^{\mathrm T} X)^{-1} X^{\mathrm T}</math> is the symmetric projection matrix onto the subspace orthogonal to ''X'', and thus ''MX'' = ''X''′''M'' = 0. We have argued before that this matrix has rank ''n''&nbsp;−&nbsp;''p'', and thus by properties of the chi-squared distribution,
:<math>\frac{n\,\widehat\sigma^{\,2}}{\sigma^2} = \frac{\varepsilon^{\mathrm T} M\varepsilon}{\sigma^2}\ \sim\ \chi^2_{n-p}.</math>
Moreover, the estimators <math>\hat\beta</math> and <math>\widehat\sigma^{\,2}</math> turn out to be independent (conditional on ''X''), a fact which is fundamental for the construction of the classical t- and F-tests. The independence can be easily seen from the following: the estimator <math>\hat\beta</math> represents the coefficients of the vector decomposition of <math>\hat y = X\hat\beta = Py = X\beta + P\varepsilon</math> by the basis of the columns of ''X''; as such <math>\hat\beta</math> is a function of ''Pε''. At the same time, the estimator <math>\widehat\sigma^{\,2}</math> is the squared norm of the vector ''Mε'' divided by ''n'', and thus this estimator is a function of ''Mε''. Now, the random variables (''Pε'', ''Mε'') are jointly normal as a linear transformation of ''ε'', and they are also uncorrelated because ''PM'' = 0. By properties of the multivariate normal distribution, this means that ''Pε'' and ''Mε'' are independent, and therefore the estimators <math>\hat\beta</math> and <math>\widehat\sigma^{\,2}</math> will be independent as well.
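A simulation sketch (assuming NumPy) of the two finite-sample facts above: <math>n\widehat\sigma^{\,2}/\sigma^2</math> behaves like a <math>\chi^2_{n-p}</math> variable, and <math>\hat\beta</math> and <math>\widehat\sigma^{\,2}</math> are essentially uncorrelated:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(9)
n, p, sigma = 25, 3, 1.0
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)

reps = 20000
beta1_hat = np.empty(reps)
q = np.empty(reps)                       # n * sigma2_hat / sigma^2
for r in range(reps):
    y = X @ beta + sigma * rng.normal(size=n)
    b = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ b
    beta1_hat[r] = b[0]
    q[r] = resid @ resid / sigma**2

# chi^2_{n-p} has mean n - p and variance 2(n - p)
print(q.mean(), n - p)
print(q.var(), 2 * (n - p))
# independence (conditional on X) implies zero correlation
print(np.corrcoef(beta1_hat, q)[0, 1])   # close to 0
</syntaxhighlight>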
Derivation of simple linear regression estimators
We look for <math>\hat\alpha</math> and <math>\hat\beta</math> that minimize the sum of squared errors (SSE):
:<math>\min_{\hat\alpha,\,\hat\beta}\ \operatorname{SSE}\bigl(\hat\alpha, \hat\beta\bigr) \equiv \min_{\hat\alpha,\,\hat\beta}\ \sum_{i=1}^n \bigl(y_i - \hat\alpha - \hat\beta x_i\bigr)^2.</math>
To find a minimum take partial derivatives with respect to <math>\hat\alpha</math> and <math>\hat\beta</math>:
:<math>\frac{\partial}{\partial \hat\alpha}\operatorname{SSE}\bigl(\hat\alpha, \hat\beta\bigr) = -2\sum_{i=1}^n \bigl(y_i - \hat\alpha - \hat\beta x_i\bigr) = 0 \quad\Rightarrow\quad n\hat\alpha = \sum_{i=1}^n y_i - \hat\beta\sum_{i=1}^n x_i \quad\Rightarrow\quad \hat\alpha = \bar y - \hat\beta \bar x.</math>
Before taking the partial derivative with respect to <math>\hat\beta</math>, substitute the previous result for <math>\hat\alpha</math>:
:<math>\min_{\hat\beta}\ \sum_{i=1}^n \bigl(y_i - (\bar y - \hat\beta \bar x) - \hat\beta x_i\bigr)^2 = \min_{\hat\beta}\ \sum_{i=1}^n \bigl((y_i - \bar y) - \hat\beta (x_i - \bar x)\bigr)^2.</math>
Now, take the derivative with respect to <math>\hat\beta</math>:
:<math>\frac{\partial}{\partial \hat\beta}\ \sum_{i=1}^n \bigl((y_i - \bar y) - \hat\beta (x_i - \bar x)\bigr)^2 = -2\sum_{i=1}^n (x_i - \bar x)\bigl((y_i - \bar y) - \hat\beta (x_i - \bar x)\bigr) = 0,</math>
:<math>\Rightarrow\quad \hat\beta = \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^n (x_i - \bar x)^2} = \frac{\operatorname{Cov}[x, y]}{\operatorname{Var}[x]}.</math>
And finally substitute <math>\hat\beta</math> to determine <math>\hat\alpha</math>:
:<math>\hat\alpha = \bar y - \hat\beta \bar x.</math>
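The closed-form simple-regression estimators can be compared against a polynomial-fit routine; a minimal sketch assuming NumPy and synthetic data:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(10)
x = rng.normal(size=200)
y = 3.0 + 2.0 * x + 0.5 * rng.normal(size=200)

x_bar, y_bar = x.mean(), y.mean()
beta_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
alpha_hat = y_bar - beta_hat * x_bar

slope, intercept = np.polyfit(x, y, deg=1)      # degree-1 fit returns [slope, intercept]
print(np.allclose([alpha_hat, beta_hat], [intercept, slope]))   # True
</syntaxhighlight>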