In statistics, ordinary least squares (OLS) is a type of
linear least squares method for choosing the unknown
parameters in a
linear regression model (with fixed level-one effects of a
linear function of a set of
explanatory variables) by the principle of
least squares: minimizing the sum of the squares of the differences between the observed
dependent variable (values of the variable being observed) in the input
dataset and the output of the (linear) function of the
independent variable.
Geometrically, this is seen as the sum of the squared distances, parallel to the axis of the dependent variable, between each data point in the set and the corresponding point on the regression surface—the smaller the differences, the better the model fits the data. The resulting
estimator can be expressed by a simple formula, especially in the case of a
simple linear regression, in which there is a single
regressor on the right side of the regression equation.
The OLS estimator is
consistent for the level-one fixed effects when the regressors are
exogenous and there is no perfect multicollinearity (rank condition), consistent for the variance estimate of the residuals when regressors have finite fourth moments and—by the
Gauss–Markov theorem—
optimal in the class of linear unbiased estimators when the
errors are
homoscedastic and
serially uncorrelated. Under these conditions, the method of OLS provides
minimum-variance mean-unbiased estimation when the errors have finite
variances. Under the additional assumption that the errors are
normally distributed with zero mean, OLS is the
maximum likelihood estimator that outperforms any non-linear unbiased estimator.
Linear model
Suppose the data consists of ''n'' observations \{\mathbf{x}_i, y_i\}_{i=1}^{n}. Each observation ''i'' includes a scalar response y_i and a column vector \mathbf{x}_i of ''p'' parameters (regressors), i.e., \mathbf{x}_i = (x_{i1}, x_{i2}, \dots, x_{ip})^\mathsf{T}. In a linear regression model, the response variable, y_i, is a linear function of the regressors:
: y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \varepsilon_i,
or in vector form,
: y_i = \mathbf{x}_i^\mathsf{T} \boldsymbol\beta + \varepsilon_i,
where \mathbf{x}_i, as introduced previously, is a column vector of the ''i''-th observation of all the explanatory variables; \boldsymbol\beta is a p \times 1 vector of unknown parameters; and the scalar \varepsilon_i represents unobserved random variables (errors) of the ''i''-th observation. \varepsilon_i accounts for the influences upon the responses y_i from sources other than the explanators \mathbf{x}_i
. This model can also be written in matrix notation as
: \mathbf{y} = X\boldsymbol\beta + \boldsymbol\varepsilon,
where \mathbf{y} and \boldsymbol\varepsilon are n \times 1 vectors of the response variables and the errors of the ''n'' observations, and X is an n \times p matrix of regressors, also sometimes called the design matrix, whose row ''i'' is \mathbf{x}_i^\mathsf{T} and contains the ''i''-th observations on all the explanatory variables.
As a rule, the constant term is always included in the set of regressors X, say, by taking x_{i1} = 1 for all i = 1, \dots, n. The coefficient \beta_1 corresponding to this regressor is called the ''intercept''.
Regressors do not have to be independent: there can be any desired relationship between the regressors (so long as it is not a linear relationship). For instance, we might suspect the response depends linearly both on a value and its square, in which case we would include one regressor whose value is just the square of another regressor. In that case, the model would be ''quadratic'' in the second regressor, but is nonetheless still considered a ''linear'' model because the model ''is'' still linear in the parameters (\boldsymbol\beta).
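The point that such a model stays linear in the parameters can be illustrated with a small sketch (assuming NumPy; the data and coefficients below are fabricated for illustration):

```python
import numpy as np

# Hypothetical data: the response depends on x and on x squared.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.1, size=50)

# Design matrix with a constant, x, and x**2 as regressors.
# The model is quadratic in x but linear in the parameters,
# so the ordinary least-squares machinery applies unchanged.
X = np.column_stack([np.ones_like(x), x, x**2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # estimates close to the true (1, 2, -3)
```

The same trick covers any fixed nonlinear transformation of a regressor (logs, interactions, polynomials), as long as the parameters enter linearly.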
Matrix/vector formulation
Consider an overdetermined system
: \sum_{j=1}^{p} X_{ij}\beta_j = y_i, \qquad i = 1, 2, \dots, n,
of ''n'' linear equations in ''p'' unknown coefficients, \beta_1, \beta_2, \dots, \beta_p, with n > p. (Note: for a linear model as above, not all elements in X contain information on the data points. The first column is populated with ones, X_{i1} = 1. Only the other columns contain actual data. So here ''p'' is equal to the number of regressors plus one.) This can be written in
matrix form as
: X \boldsymbol\beta = \mathbf{y},
where
: X = \begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1p} \\ X_{21} & X_{22} & \cdots & X_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ X_{n1} & X_{n2} & \cdots & X_{np} \end{bmatrix}, \qquad \boldsymbol\beta = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{bmatrix}, \qquad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}.
Such a system usually has no exact solution, so the goal is instead to find the coefficients \boldsymbol\beta which fit the equations "best", in the sense of solving the quadratic minimization problem
: \hat{\boldsymbol\beta} = \operatorname*{arg\,min}_{\boldsymbol\beta} S(\boldsymbol\beta),
where the objective function S is given by
: S(\boldsymbol\beta) = \sum_{i=1}^{n} \Bigl| y_i - \sum_{j=1}^{p} X_{ij}\beta_j \Bigr|^2 = \bigl\| \mathbf{y} - X\boldsymbol\beta \bigr\|^2.
A justification for choosing this criterion is given in Properties below. This minimization problem has a unique solution, provided that the ''p'' columns of the matrix X are linearly independent, given by solving the so-called ''normal equations'':
: (X^\mathsf{T} X)\hat{\boldsymbol\beta} = X^\mathsf{T} \mathbf{y}.
The matrix X^\mathsf{T} X is known as the ''normal matrix'' or Gram matrix and the matrix X^\mathsf{T} \mathbf{y} is known as the moment matrix of regressand by regressors. Finally, \hat{\boldsymbol\beta} is the coefficient vector of the least-squares hyperplane, expressed as
: \hat{\boldsymbol\beta} = (X^\mathsf{T} X)^{-1} X^\mathsf{T} \mathbf{y}
or
: \hat{\boldsymbol\beta} = \boldsymbol\beta + (X^\mathsf{T} X)^{-1} X^\mathsf{T} \boldsymbol\varepsilon.
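The normal equations can be verified numerically; the sketch below (assuming NumPy, with fabricated data) solves them directly and checks the result against a library least-squares routine:

```python
import numpy as np

# Fabricated data for a model with an intercept and two regressors.
rng = np.random.default_rng(1)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([0.5, 1.5, -2.0]) + rng.normal(scale=0.5, size=n)

# Solve the normal equations (X^T X) beta = X^T y directly.
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# The library least-squares routine gives the same estimator.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(beta_normal, beta_lstsq)
```

Note that forming X^T X explicitly squares the condition number of X, so in practice QR- or SVD-based routines such as np.linalg.lstsq are usually preferred over solving the normal equations.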
Estimation
Suppose ''b'' is a "candidate" value for the parameter vector ''β''. The quantity y_i - \mathbf{x}_i^\mathsf{T} b, called the residual for the ''i''-th observation, measures the vertical distance between the data point (\mathbf{x}_i, y_i) and the hyperplane y = \mathbf{x}^\mathsf{T} b, and thus assesses the degree of fit between the actual data and the model. The sum of squared residuals (SSR) (also called the error sum of squares (ESS) or residual sum of squares (RSS)) is a measure of the overall model fit:
: S(b) = \sum_{i=1}^{n} \bigl(y_i - \mathbf{x}_i^\mathsf{T} b\bigr)^2 = (\mathbf{y} - Xb)^\mathsf{T} (\mathbf{y} - Xb),
where ''T'' denotes the matrix transpose, and the rows of ''X'', denoting the values of all the independent variables associated with a particular value of the dependent variable, are X_i = \mathbf{x}_i^\mathsf{T}. The value of ''b'' which minimizes this sum is called the OLS estimator for ''β''. The function ''S''(''b'') is quadratic in ''b'' with positive-definite
Hessian, and therefore this function possesses a unique global minimum at b = \hat{\boldsymbol\beta}, which can be given by the explicit formula:
: \hat{\boldsymbol\beta} = (X^\mathsf{T} X)^{-1} X^\mathsf{T} \mathbf{y}.
The product N = X^\mathsf{T} X is a Gram matrix and its inverse, Q = N^{-1}, is the ''cofactor matrix'' of ''β'', closely related to its covariance matrix, C_\beta.
The matrix (X^\mathsf{T} X)^{-1} X^\mathsf{T} = Q X^\mathsf{T} is called the Moore–Penrose pseudoinverse matrix of ''X''. This formulation highlights the point that estimation can be carried out if, and only if, there is no perfect multicollinearity between the explanatory variables (which would cause the Gram matrix to have no inverse).
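Assuming X has full column rank, the explicit formula and the pseudoinverse route give the same estimator; a minimal check with made-up data (assuming NumPy):

```python
import numpy as np

# Random, almost surely full-column-rank design matrix.
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)

# Explicit formula (X^T X)^{-1} X^T y ...
beta_explicit = np.linalg.inv(X.T @ X) @ X.T @ y

# ... agrees with applying the Moore-Penrose pseudoinverse of X to y.
beta_pinv = np.linalg.pinv(X) @ y
assert np.allclose(beta_explicit, beta_pinv)
```

When the columns of X are exactly collinear, np.linalg.inv raises an error while np.linalg.pinv still returns the minimum-norm solution, mirroring the multicollinearity caveat above.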
After we have estimated ''β'', the fitted values (or predicted values) from the regression will be
: \hat{\mathbf{y}} = X\hat{\boldsymbol\beta} = P\mathbf{y},
where P = X(X^\mathsf{T}X)^{-1}X^\mathsf{T} is the projection matrix onto the space ''V'' spanned by the columns of ''X''. This matrix ''P'' is also sometimes called the hat matrix because it "puts a hat" onto the variable ''y''. Another matrix, closely related to ''P'', is the ''annihilator'' matrix M = I_n - P; this is a projection matrix onto the space orthogonal to ''V''. Both matrices ''P'' and ''M'' are symmetric and idempotent (meaning that P^2 = P and M^2 = M), and relate to the data matrix ''X'' via identities PX = X and MX = 0. Matrix ''M'' creates the residuals from the regression:
: \hat{\boldsymbol\varepsilon} = \mathbf{y} - \hat{\mathbf{y}} = \mathbf{y} - X\hat{\boldsymbol\beta} = M\mathbf{y} = M\boldsymbol\varepsilon.
Using these residuals we can estimate the value of ''σ''2 using the reduced chi-squared statistic:
: s^2 = \frac{\hat{\boldsymbol\varepsilon}^\mathsf{T} \hat{\boldsymbol\varepsilon}}{n-p} = \frac{\mathbf{y}^\mathsf{T} M \mathbf{y}}{n-p} = \frac{S(\hat{\boldsymbol\beta})}{n-p}, \qquad \hat\sigma^2 = \frac{n-p}{n}\, s^2.
The denominator, ''n''−''p'', is the statistical degrees of freedom. The first quantity, ''s''2, is the OLS estimate for ''σ''2, whereas the second, \hat\sigma^2, is the MLE estimate for ''σ''2. The two estimators are quite similar in large samples; the first estimator is always unbiased, while the second estimator is biased but has a smaller mean squared error. In practice ''s''2 is used more often, since it is more convenient for hypothesis testing. The square root of ''s''2 is called the regression standard error, standard error of the regression, or standard error of the equation.
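The relation between the two variance estimates can be checked directly; a sketch on a simulated model (assuming NumPy, fabricated data and a true error scale of 1.5):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=1.5, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
ssr = resid @ resid

s2 = ssr / (n - p)     # unbiased OLS estimate of sigma^2
sigma2_mle = ssr / n   # MLE estimate: biased, smaller by factor (n-p)/n
assert sigma2_mle < s2
print(s2**0.5)  # regression standard error, near the true scale 1.5
```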
It is common to assess the goodness-of-fit of the OLS regression by comparing how much the initial variation in the sample can be reduced by regressing onto ''X''. The coefficient of determination ''R''2 is defined as a ratio of "explained" variance to the "total" variance of the dependent variable ''y'', in the cases where the regression sum of squares equals the sum of squares of residuals:
: R^2 = \frac{\sum (\hat y_i - \bar y)^2}{\sum (y_i - \bar y)^2} = \frac{\mathbf{y}^\mathsf{T} P^\mathsf{T} L P \mathbf{y}}{\mathbf{y}^\mathsf{T} L \mathbf{y}} = 1 - \frac{\mathbf{y}^\mathsf{T} M \mathbf{y}}{\mathbf{y}^\mathsf{T} L \mathbf{y}} = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}},
where TSS is the total sum of squares for the dependent variable, L = I_n - \frac{1}{n} J_n, and J_n is an ''n''×''n'' matrix of ones. (L is a centering matrix which is equivalent to regression on a constant; it simply subtracts the mean from a variable.) In order for ''R''2 to be meaningful, the matrix ''X'' of data on regressors must contain a column vector of ones to represent the constant whose coefficient is the regression intercept. In that case, ''R''2 will always be a number between 0 and 1, with values close to 1 indicating a good degree of fit.
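A minimal computation of ''R''2 along these lines, assuming NumPy and simulated data whose design matrix includes the required intercept column:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 150
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept included
y = X @ np.array([0.5, 3.0]) + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat

tss = np.sum((y - y.mean())**2)   # total sum of squares
rss = np.sum((y - y_hat)**2)      # residual sum of squares
r2 = 1 - rss / tss
assert 0 <= r2 <= 1
```

Because the design matrix here contains a column of ones, R² is guaranteed to fall in [0, 1]; dropping the intercept column would break that guarantee.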
The variance in the prediction of the independent variable as a function of the dependent variable is given in the article Polynomial least squares.
Simple linear regression model
If the data matrix ''X'' contains only two variables, a constant and a scalar regressor x_i, then this is called the "simple regression model". This case is often considered in beginner statistics classes, as it provides much simpler formulas even suitable for manual calculation. The parameters are commonly denoted as (''α'', ''β''):
: y_i = \alpha + \beta x_i + \varepsilon_i.
The least squares estimates in this case are given by simple formulas
: \hat\beta = \frac{\sum_{i=1}^{n} x_i y_i - \frac{1}{n} \sum_{i=1}^{n} x_i \sum_{j=1}^{n} y_j}{\sum_{i=1}^{n} x_i^2 - \frac{1}{n} \bigl(\sum_{i=1}^{n} x_i\bigr)^2} = \frac{\operatorname{Cov}[x, y]}{\operatorname{Var}[x]}, \qquad \hat\alpha = \bar{y} - \hat\beta\, \bar{x}.
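These closed-form estimates can be checked against the general matrix formula; a sketch with fabricated data (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=80)
y = 4.0 + 0.75 * x + rng.normal(scale=0.3, size=80)

# Closed-form simple-regression estimates:
# slope = Cov[x, y] / Var[x], intercept = mean(y) - slope * mean(x).
beta = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
alpha = y.mean() - beta * x.mean()

# They match the general matrix least-squares solution.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose([alpha, beta], coef)
```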
Alternative derivations
In the previous section the least squares estimator \hat{\boldsymbol\beta} was obtained as a value that minimizes the sum of squared residuals of the model. However, it is also possible to derive the same estimator from other approaches. In all cases the formula for the OLS estimator remains the same: \hat{\boldsymbol\beta} = (X^\mathsf{T} X)^{-1} X^\mathsf{T} \mathbf{y}; the only difference is in how we interpret this result.
Projection
For mathematicians, OLS is an approximate solution to an overdetermined system of linear equations X\boldsymbol\beta \approx \mathbf{y}, where ''β'' is the unknown. Assuming the system cannot be solved exactly (the number of equations ''n'' is much larger than the number of unknowns ''p''), we are looking for a solution that could provide the smallest discrepancy between the right- and left-hand sides. In other words, we are looking for the solution that satisfies
: \hat{\boldsymbol\beta} = \operatorname*{arg\,min}_{\boldsymbol\beta} \bigl\| \mathbf{y} - X\boldsymbol\beta \bigr\|,
where \|\cdot\| is the standard ''L''2 norm in the ''n''-dimensional Euclidean space R''n''. The predicted quantity ''Xβ'' is just a certain linear combination of the vectors of regressors. Thus, the residual vector \mathbf{y} - X\boldsymbol\beta will have the smallest length when ''y'' is projected orthogonally onto the linear subspace spanned by the columns of ''X''. The OLS estimator \hat{\boldsymbol\beta} in this case can be interpreted as the coefficients of vector decomposition of \hat{\mathbf{y}} = P\mathbf{y} along the basis of ''X''.
In other words, the gradient equations at the minimum can be written as:
: (\mathbf{y} - X\hat{\boldsymbol\beta})^\mathsf{T} X = 0.
A geometrical interpretation of these equations is that the vector of residuals, \mathbf{y} - X\hat{\boldsymbol\beta}, is orthogonal to the column space of ''X'', since the dot product (\mathbf{y} - X\hat{\boldsymbol\beta}) \cdot X\mathbf{v} is equal to zero for ''any'' conformal vector, \mathbf{v}. This means that \mathbf{y} - X\hat{\boldsymbol\beta} is the shortest of all possible vectors \mathbf{y} - X\boldsymbol\beta, that is, the variance of the residuals is the minimum possible. This is illustrated at the right.
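This orthogonality of the residual vector to the column space of ''X'' can be confirmed numerically (a sketch assuming NumPy, with random data):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 3))
y = rng.normal(size=60)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# The residual vector is orthogonal to every column of X:
# X^T (y - X beta_hat) = 0 up to floating-point rounding error.
assert np.allclose(X.T @ resid, 0, atol=1e-10)
```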
Introducing and a matrix ''K'' with the assumption that a matrix