In
statistics, ordinary least squares (OLS) is a type of
linear least squares method for choosing the unknown
parameters in a
linear regression
model (with fixed level-one effects of a
linear function
of a set of
explanatory variables) by the principle of
least squares
: minimizing the sum of the squares of the differences between the observed
dependent variable
(values of the variable being observed) in the input
dataset and the output of the (linear) function of the
independent variable.
Geometrically, this is seen as the sum of the squared distances, parallel to the axis of the dependent variable, between each data point in the set and the corresponding point on the regression surface—the smaller the differences, the better the model fits the data. The resulting
estimator can be expressed by a simple formula, especially in the case of a
simple linear regression
, in which there is a single
regressor on the right side of the regression equation.
The OLS estimator is
consistent for the level-one fixed effects when the regressors are
exogenous and there is no perfect multicollinearity (rank condition), consistent for the variance estimate of the residuals when the regressors have finite fourth moments and—by the
Gauss–Markov theorem—
optimal in the class of linear unbiased estimators when the
errors are
homoscedastic and
serially uncorrelated. Under these conditions, the method of OLS provides
minimum-variance mean-unbiased estimation when the errors have finite
variances. Under the additional assumption that the errors are
normally distributed with zero mean, OLS is the
maximum likelihood estimator
that outperforms any non-linear unbiased estimator.
Linear model

Suppose the data consists of ''n'' observations \{x_i, y_i\}_{i=1}^{n}. Each observation ''i'' includes a scalar response y_i and a column vector x_i of ''p'' parameters (regressors), i.e., x_i = [x_{i1}, x_{i2}, \dots, x_{ip}]^{\mathsf T}. In a linear regression model, the response variable, y_i, is a linear function of the regressors:
: y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \varepsilon_i,
or in vector form,
: y_i = x_i^{\mathsf T} \beta + \varepsilon_i,
where x_i, as introduced previously, is a column vector of the ''i''-th observation of all the explanatory variables; \beta is a p \times 1 vector of unknown parameters; and the scalar \varepsilon_i represents unobserved random variables (errors) of the ''i''-th observation. \varepsilon_i accounts for the influences upon the responses y_i from sources other than the explanators x_i. This model can also be written in matrix notation as
: y = X\beta + \varepsilon,
where y and \varepsilon are n \times 1 vectors of the response variables and the errors of the ''n'' observations, and X is an n \times p matrix of regressors, also sometimes called the design matrix, whose row ''i'' is x_i^{\mathsf T} and contains the ''i''-th observations on all the explanatory variables.
As a rule, the constant term is always included in the set of regressors X, say, by taking x_{i1} = 1 for all i = 1, \dots, n. The coefficient \beta_1 corresponding to this regressor is called the ''intercept''.
Regressors do not have to be independent: there can be any desired relationship between the regressors (so long as it is not a linear relationship). For instance, we might suspect the response depends linearly both on a value and its square, in which case we would include one regressor whose value is just the square of another regressor. In that case, the model would be ''quadratic'' in the second regressor, but it is nonetheless still considered a ''linear'' model because the model ''is'' still linear in the parameters (\beta).
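As a minimal numerical sketch of this point (assuming NumPy and synthetic data chosen only for illustration), one can build a design matrix whose columns are a constant, a regressor x, and its square; the fitted model is quadratic in x yet linear in the parameters:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=50)                               # single underlying explanatory variable
y = 1.0 + 2.0*x - 0.5*x**2 + rng.normal(scale=0.1, size=50)   # hypothetical responses

# Design matrix with a constant, x, and x squared: still linear in the parameters.
X = np.column_stack([np.ones_like(x), x, x**2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)                                               # approximately [1.0, 2.0, -0.5]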
Matrix/vector formulation
Consider an overdetermined system
: \sum_{j=1}^{p} X_{ij} \beta_j = y_i, \qquad (i = 1, 2, \dots, n),
of ''n'' linear equations in ''p'' unknown coefficients, \beta_1, \beta_2, \dots, \beta_p, with n > p. (Note: for a linear model as above, not all elements in X contain information on the data points. The first column is populated with ones, X_{i1} = 1; only the other columns contain actual data, so here ''p'' is equal to the number of regressors plus one.) This can be written in matrix form as
: X \beta = y,
where
: X = \begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1p} \\ X_{21} & X_{22} & \cdots & X_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ X_{n1} & X_{n2} & \cdots & X_{np} \end{bmatrix}, \qquad \beta = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{bmatrix}, \qquad y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}.
Such a system usually has no exact solution, so the goal is instead to find the coefficients \beta which fit the equations "best", in the sense of solving the quadratic minimization problem
: \hat\beta = \underset{\beta}{\operatorname{arg\,min}}\, S(\beta),
where the objective function S is given by
: S(\beta) = \sum_{i=1}^{n} \left| y_i - \sum_{j=1}^{p} X_{ij}\beta_j \right|^2 = \left\| y - X\beta \right\|^2.
A justification for choosing this criterion is given in Properties below. This minimization problem has a unique solution, provided that the ''p'' columns of the matrix X are linearly independent, given by solving the so-called ''normal equations'':
: (X^{\mathsf T} X)\hat\beta = X^{\mathsf T} y.
The matrix X^{\mathsf T} X is known as the ''normal matrix'' or Gram matrix, and the matrix X^{\mathsf T} y is known as the moment matrix of regressand by regressors. Finally, \hat\beta is the coefficient vector of the least-squares hyperplane, expressed as
: \hat\beta = (X^{\mathsf T} X)^{-1} X^{\mathsf T} y
or
: \hat\beta = \beta + (X^{\mathsf T} X)^{-1} X^{\mathsf T} \varepsilon.
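A short numerical sketch of the normal equations, assuming NumPy and synthetic data (the variable names are illustrative only); in practice a routine such as numpy.linalg.lstsq is preferred to forming X^T X explicitly:

import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])    # regressors, including a constant
y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(scale=0.3, size=n)

# Solve the normal equations (X^T X) beta_hat = X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically more stable equivalent that avoids forming X^T X.
beta_hat_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_hat_lstsq))                      # True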
Estimation
Suppose ''b'' is a "candidate" value for the parameter vector ''β''. The quantity y_i - x_i^{\mathsf T} b, called the residual for the ''i''-th observation, measures the vertical distance between the data point (x_i, y_i) and the hyperplane y = x^{\mathsf T} b, and thus assesses the degree of fit between the actual data and the model. The sum of squared residuals (SSR) (also called the error sum of squares (ESS) or residual sum of squares (RSS)) is a measure of the overall model fit:
: S(b) = \sum_{i=1}^{n} (y_i - x_i^{\mathsf T} b)^2 = (y - Xb)^{\mathsf T}(y - Xb),
where ''T'' denotes the matrix transpose, and the rows of ''X'', denoting the values of all the independent variables associated with a particular value of the dependent variable, are X_i = x_i^{\mathsf T}. The value of ''b'' which minimizes this sum is called the OLS estimator for ''β''. The function ''S''(''b'') is quadratic in ''b'' with positive-definite Hessian, and therefore this function possesses a unique global minimum at b = \hat\beta, which can be given by the explicit formula:
: \hat\beta = \underset{b \in \mathbb{R}^p}{\operatorname{arg\,min}}\, S(b) = (X^{\mathsf T} X)^{-1} X^{\mathsf T} y.
The product N = X^{\mathsf T} X is a Gram matrix, and its inverse, Q = N^{-1}, is the ''cofactor matrix'' of ''β'', closely related to its covariance matrix, C_\beta.
The matrix (X^{\mathsf T} X)^{-1} X^{\mathsf T} = Q X^{\mathsf T} is called the Moore–Penrose pseudoinverse matrix of X. This formulation highlights the point that estimation can be carried out if, and only if, there is no perfect multicollinearity between the explanatory variables (which would cause the Gram matrix to have no inverse).
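As a sketch of the pseudoinverse formulation (NumPy, synthetic data; numpy.linalg.pinv computes the Moore–Penrose pseudoinverse):

import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, 0.5, -2.0]) + rng.normal(scale=0.1, size=50)

Q = np.linalg.inv(X.T @ X)                   # cofactor matrix Q = (X^T X)^{-1}
beta_hat = Q @ X.T @ y                       # explicit OLS formula
beta_hat_pinv = np.linalg.pinv(X) @ y        # via the Moore-Penrose pseudoinverse of X
print(np.allclose(beta_hat, beta_hat_pinv))  # True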
After we have estimated ''β'', the fitted values (or predicted values) from the regression will be
: \hat{y} = X\hat\beta = Py,
where P = X(X^{\mathsf T} X)^{-1} X^{\mathsf T} is the projection matrix onto the space ''V'' spanned by the columns of ''X''. This matrix ''P'' is also sometimes called the hat matrix because it "puts a hat" onto the variable ''y''. Another matrix, closely related to ''P'', is the ''annihilator'' matrix M = I_n - P; this is a projection matrix onto the space orthogonal to ''V''. Both matrices ''P'' and ''M'' are symmetric and idempotent (meaning that P^2 = P and M^2 = M), and relate to the data matrix ''X'' via the identities PX = X and MX = 0. Matrix ''M'' creates the residuals from the regression:
: \hat\varepsilon = y - \hat{y} = y - X\hat\beta = My = M\varepsilon.
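A small sketch of these identities, assuming NumPy and synthetic data (forming P and M explicitly is only sensible for small n):

import numpy as np

rng = np.random.default_rng(3)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T                   # hat (projection) matrix
M = np.eye(n) - P                                      # annihilator matrix
print(np.allclose(P @ P, P), np.allclose(M @ M, M))    # idempotent: True True
print(np.allclose(P @ X, X), np.allclose(M @ X, 0))    # P X = X and M X = 0: True True
residuals = M @ y                                      # identical to y - X @ beta_hat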
Using these residuals we can estimate the value of ''σ''2 using the reduced chi-squared statistic:
: s^2 = \frac{\hat\varepsilon^{\mathsf T} \hat\varepsilon}{n - p} = \frac{(My)^{\mathsf T} My}{n - p} = \frac{y^{\mathsf T} M y}{n - p} = \frac{S(\hat\beta)}{n - p}, \qquad \hat\sigma^2 = \frac{n - p}{n}\, s^2.
The denominator, ''n''−''p'', is the statistical degrees of freedom. The first quantity, ''s''2, is the OLS estimate for ''σ''2, whereas the second, \hat\sigma^2, is the MLE estimate for ''σ''2. The two estimators are quite similar in large samples; the first estimator is always unbiased, while the second estimator is biased but has a smaller mean squared error. In practice ''s''2 is used more often, since it is more convenient for hypothesis testing. The square root of ''s''2 is called the regression standard error, standard error of the regression, or standard error of the equation.
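A brief sketch of the two variance estimates, assuming NumPy and synthetic data with a known error standard deviation:

import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ rng.normal(size=p) + rng.normal(scale=1.5, size=n)    # true sigma = 1.5

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - p)                  # unbiased OLS estimate of sigma^2
sigma2_mle = resid @ resid / n                # biased maximum-likelihood estimate of sigma^2
se_regression = np.sqrt(s2)                   # standard error of the regression
print(s2, sigma2_mle, se_regression)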
It is common to assess the goodness-of-fit of the OLS regression by comparing how much the initial variation in the sample can be reduced by regressing onto ''X''. The coefficient of determination ''R''2 is defined as a ratio of "explained" variance to the "total" variance of the dependent variable ''y'', in the cases where the total sum of squares equals the sum of the regression (explained) sum of squares and the residual sum of squares:
: R^2 = \frac{\sum (\hat{y}_i - \overline{y})^2}{\sum (y_i - \overline{y})^2} = \frac{y^{\mathsf T} P^{\mathsf T} L P y}{y^{\mathsf T} L y} = 1 - \frac{y^{\mathsf T} M y}{y^{\mathsf T} L y} = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}},
where TSS is the total sum of squares for the dependent variable, L = I_n - \tfrac{1}{n} J_n, and J_n is an ''n''×''n'' matrix of ones. (''L'' is a centering matrix which is equivalent to regression on a constant; it simply subtracts the mean from a variable.) In order for ''R''2 to be meaningful, the matrix ''X'' of data on regressors must contain a column vector of ones to represent the constant whose coefficient is the regression intercept. In that case, ''R''2 will always be a number between 0 and 1, with values close to 1 indicating a good degree of fit.
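The same quantities can be sketched numerically (NumPy, synthetic data; the constant column is what keeps R^2 between 0 and 1):

import numpy as np

rng = np.random.default_rng(5)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])    # includes a column of ones
y = X @ np.array([0.5, 1.0, -1.0]) + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta_hat) ** 2)         # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)             # total sum of squares
r_squared = 1.0 - rss / tss
print(0.0 <= r_squared <= 1.0)                # True when a constant term is included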
The variance in the prediction of the independent variable as a function of the dependent variable is given in the article Polynomial least squares.
Simple linear regression model
If the data matrix ''X'' contains only two variables, a constant and a scalar regressor x_i, then this is called the "simple regression model". This case is often considered in beginner statistics classes, as it provides much simpler formulas even suitable for manual calculation. The parameters are commonly denoted as (\alpha, \beta):
: y_i = \alpha + \beta x_i + \varepsilon_i.
The least squares estimates in this case are given by simple formulas
: \hat\beta = \frac{\sum_i x_i y_i - \frac{1}{n} \sum_i x_i \sum_i y_i}{\sum_i x_i^2 - \frac{1}{n} \left(\sum_i x_i\right)^2} = \frac{\operatorname{Cov}[x, y]}{\operatorname{Var}[x]}, \qquad \hat\alpha = \overline{y} - \hat\beta\, \overline{x}.
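A minimal sketch of these formulas, assuming NumPy and made-up data (np.cov with bias=True and np.var use the same 1/n normalization, so the ratio matches the formula above):

import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=80)
y = 3.0 + 0.7*x + rng.normal(scale=0.2, size=80)        # hypothetical data

beta_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)    # Cov[x, y] / Var[x]
alpha_hat = y.mean() - beta_hat * x.mean()
print(alpha_hat, beta_hat)                              # close to 3.0 and 0.7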
Alternative derivations
In the previous section the least squares estimator \hat\beta was obtained as a value that minimizes the sum of squared residuals of the model. However, it is also possible to derive the same estimator from other approaches. In all cases the formula for the OLS estimator remains the same: \hat\beta = (X^{\mathsf T} X)^{-1} X^{\mathsf T} y; the only difference is in how we interpret this result.
Projection
For mathematicians, OLS is an approximate solution to an overdetermined system of linear equations X\beta \approx y, where ''β'' is the unknown. Assuming the system cannot be solved exactly (the number of equations ''n'' is much larger than the number of unknowns ''p''), we are looking for a solution that could provide the smallest discrepancy between the right- and left-hand sides. In other words, we are looking for the solution that satisfies
: \hat\beta = \underset{\beta}{\operatorname{arg\,min}}\, \| y - X\beta \|,
where \| \cdot \| is the standard ''L''2 norm in the ''n''-dimensional Euclidean space R^n. The predicted quantity X\beta is just a certain linear combination of the vectors of regressors. Thus, the residual vector y - X\beta will have the smallest length when ''y'' is projected orthogonally onto the linear subspace spanned by the columns of ''X''. The OLS estimator \hat\beta in this case can be interpreted as the coefficients of vector decomposition of \hat{y} = Py along the basis of ''X''.
In other words, the gradient equations at the minimum can be written as:
: (y - X\hat\beta)^{\mathsf T} X = 0.
A geometrical interpretation of these equations is that the vector of residuals, y - X\hat\beta, is orthogonal to the column space of ''X'', since the dot product (y - X\hat\beta) \cdot Xv is equal to zero for ''any'' conformal vector, ''v''. This means that y - X\hat\beta is the shortest of all possible vectors y - X\beta, that is, the variance of the residuals is the minimum possible. This is illustrated at the right.
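The orthogonality condition can be checked numerically; a sketch assuming NumPy and synthetic data:

import numpy as np

rng = np.random.default_rng(7)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
# The residual vector is orthogonal to every column of X (up to round-off).
print(np.allclose(X.T @ resid, 0.0))          # True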
Introducing and a matrix ''K'' with the assumption that a matrix