In statistics, simple linear regression is a linear regression model with a single explanatory variable. That is, it concerns two-dimensional sample points with one independent variable and one dependent variable (conventionally, the ''x'' and ''y'' coordinates in a Cartesian coordinate system) and finds a linear function (a non-vertical straight line) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable.
The adjective ''simple'' refers to the fact that the outcome variable is related to a single predictor.
It is common to make the additional stipulation that the
ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared ''
residual'' (vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. Other regression methods that can be used in place of ordinary least squares include
least absolute deviations (minimizing the sum of absolute values of residuals) and the
Theil–Sen estimator (which chooses a line whose
slope is the
median of the slopes determined by pairs of sample points).
Deming regression (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit.
The remainder of the article assumes an ordinary least squares regression.
In this case, the slope of the fitted line is equal to the
correlation between ''y'' and ''x'' corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass of the data points.
Fitting the regression line
Consider the model function
:<math> y = \alpha + \beta x ,</math>
which describes a line with slope <math>\beta</math> and ''y''-intercept <math>\alpha</math>. In general such a relationship may not hold exactly for the largely unobserved population of values of the independent and dependent variables; we call the unobserved deviations from the above equation the errors. Suppose we observe <math>n</math> data pairs and call them <math>\{(x_i, y_i),\ i = 1, \ldots, n\}</math>. We can describe the underlying relationship between <math>y_i</math> and <math>x_i</math> involving this error term <math>\varepsilon_i</math> by
:<math> y_i = \alpha + \beta x_i + \varepsilon_i .</math>
This relationship between the true (but unobserved) underlying parameters <math>\alpha</math> and <math>\beta</math> and the data points is called a linear regression model.
The goal is to find estimated values <math>\widehat\alpha</math> and <math>\widehat\beta</math> for the parameters <math>\alpha</math> and <math>\beta</math> which would provide the "best" fit in some sense for the data points. As mentioned in the introduction, in this article the "best" fit will be understood as in the least-squares approach: a line that minimizes the sum of squared residuals (see also Errors and residuals) <math>\widehat\varepsilon_i</math> (differences between actual and predicted values of the dependent variable ''y''), each of which is given by, for any candidate parameter values <math>\alpha</math> and <math>\beta</math>,
:<math> \widehat\varepsilon_i = y_i - \alpha - \beta x_i .</math>
In other words, <math>\widehat\alpha</math> and <math>\widehat\beta</math> solve the following minimization problem:
:<math>(\widehat\alpha,\, \widehat\beta) = \operatorname{argmin}_{\alpha,\,\beta}\, Q(\alpha, \beta), \quad \text{where } Q(\alpha, \beta) = \sum_{i=1}^n \widehat\varepsilon_i^{\,2} = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2 .</math>
By expanding to get a quadratic expression in <math>\alpha</math> and <math>\beta</math>, we can derive values of <math>\alpha</math> and <math>\beta</math> that minimize the objective function <math>Q</math> (these minimizing values are denoted <math>\widehat\alpha</math> and <math>\widehat\beta</math>):
:<math>\begin{align}
\widehat\beta &= \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^n (x_i - \bar x)^2} = \frac{s_{x,y}}{s_x^2} = r_{xy} \frac{s_y}{s_x}, \\[6pt]
\widehat\alpha &= \bar y - \widehat\beta\, \bar x .
\end{align}</math>
Here we have introduced
* <math>\bar x</math> and <math>\bar y</math> as the average of the <math>x_i</math> and <math>y_i</math>, respectively;
* <math>r_{xy}</math> as the sample correlation coefficient between ''x'' and ''y'';
* <math>s_x</math> and <math>s_y</math> as the uncorrected sample standard deviations of ''x'' and ''y'';
* <math>s_x^2</math> and <math>s_{x,y}</math> as the sample variance and sample covariance, respectively.
Substituting the above expressions for <math>\widehat\alpha</math> and <math>\widehat\beta</math> into
:<math> f = \widehat\alpha + \widehat\beta x</math>
yields
:<math> \frac{f - \bar y}{s_y} = r_{xy} \frac{x - \bar x}{s_x} .</math>
This shows that <math>r_{xy}</math> is the slope of the regression line of the standardized data points (and that this line passes through the origin). Since <math>-1 \le r_{xy} \le 1</math>, we get that if ''x'' is some measurement and ''y'' is a follow-up measurement from the same item, then we expect that ''y'' (on average) will be closer to the mean measurement than it was to the original value of ''x''. This phenomenon is known as regression toward the mean.
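As a minimal illustrative sketch (not part of the original article), the closed-form estimates <math>\widehat\alpha</math> and <math>\widehat\beta</math> derived above can be computed directly from a sample; the helper name and the data values below are made up.
<syntaxhighlight lang="python">
import numpy as np

def fit_simple_ols(x, y):
    """Closed-form OLS estimates for y = alpha + beta * x (hypothetical helper)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x_bar, y_bar = x.mean(), y.mean()
    # beta_hat = sum((x_i - x_bar)(y_i - y_bar)) / sum((x_i - x_bar)^2)
    beta_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    # alpha_hat = y_bar - beta_hat * x_bar, so the line passes through (x_bar, y_bar)
    alpha_hat = y_bar - beta_hat * x_bar
    return alpha_hat, beta_hat

# Illustrative data, made up for this sketch
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
alpha_hat, beta_hat = fit_simple_ols(x, y)
print(alpha_hat, beta_hat)
</syntaxhighlight>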
Generalizing the <math>\bar x</math> notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples. For example:
:<math> \overline{xy} = \frac{1}{n} \sum_{i=1}^n x_i y_i .</math>
This notation allows us a concise formula for <math>r_{xy}</math>:
:<math> r_{xy} = \frac{\overline{xy} - \bar x \bar y}{\sqrt{\left(\overline{x^2} - \bar x^2\right)\left(\overline{y^2} - \bar y^2\right)}} .</math>
The
coefficient of determination ("R squared") is equal to <math>r_{xy}^2</math> when the model is linear with a single independent variable. See
sample correlation coefficient for additional details.
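A quick numerical check of this identity, as a sketch with made-up data (the array values are illustrative only):
<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # illustrative data
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit by the closed-form OLS expressions
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
y_hat = alpha_hat + beta_hat * x

# Coefficient of determination: 1 - SS_res / SS_tot
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
# Squared sample correlation coefficient
r_xy_sq = np.corrcoef(x, y)[0, 1] ** 2

print(np.isclose(r_squared, r_xy_sq))   # True for simple linear regression
</syntaxhighlight>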
Intuition about the slope
By multiplying all members of the summation in the numerator by <math>\frac{x_i - \bar x}{x_i - \bar x} = 1</math> (thereby not changing it):
:<math> \widehat\beta = \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^n (x_i - \bar x)^2} = \sum_{i=1}^n \frac{(x_i - \bar x)^2}{\sum_{j=1}^n (x_j - \bar x)^2} \cdot \frac{y_i - \bar y}{x_i - \bar x} .</math>
We can see that the slope (tangent of angle) of the regression line is the weighted average of <math>\frac{y_i - \bar y}{x_i - \bar x}</math>, that is, the slope (tangent of angle) of the line that connects the ''i''-th point to the average of all points, weighted by <math>(x_i - \bar x)^2</math>. The further the point is from the center, the more "important" it is, since small errors in its position affect the slope connecting it to the center point less.
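A small sketch (illustrative data, not from the article) confirming that the OLS slope equals this weighted average of point-to-centroid slopes:
<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0, 9.0])   # illustrative data
y = np.array([1.2, 2.3, 3.9, 7.5, 9.1])

xc, yc = x - x.mean(), y - y.mean()
beta_ols = np.sum(xc * yc) / np.sum(xc ** 2)

# Weighted average of slopes from each point to the centroid,
# with weights proportional to (x_i - x_bar)^2
weights = xc ** 2 / np.sum(xc ** 2)
point_slopes = yc / xc                      # assumes no x_i equals x_bar exactly
beta_weighted = np.sum(weights * point_slopes)

print(np.isclose(beta_ols, beta_weighted))  # True
</syntaxhighlight>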
Intuition about the intercept
:<math> \widehat\alpha = \bar y - \widehat\beta\, \bar x</math>
Given <math>\widehat\beta = \tan(\theta) = \dfrac{\Delta y}{\Delta x}</math>, with <math>\theta</math> the angle the line makes with the positive ''x'' axis, we have <math>y_{\text{intercept}} = \bar y - \dfrac{\Delta y}{\Delta x}\,\bar x = \bar y - \widehat\beta\, \bar x .</math>
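A one-line check of this property with made-up numbers: the fitted line evaluated at <math>\bar x</math> returns <math>\bar y</math>.
<syntaxhighlight lang="python">
import numpy as np

x = np.array([0.5, 1.5, 2.5, 3.5, 4.5])   # illustrative data
y = np.array([1.0, 1.8, 3.1, 3.7, 5.2])

beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()

# The regression line passes through the center of mass (x_bar, y_bar)
print(np.isclose(alpha_hat + beta_hat * x.mean(), y.mean()))   # True
</syntaxhighlight>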
Intuition about the correlation
In the above formulation, notice that each <math>x_i</math> is a constant ("known upfront") value, while the <math>y_i</math> are random variables that depend on the linear function of <math>x_i</math> and the random term <math>\varepsilon_i</math>. This assumption is used when deriving the standard error of the slope and showing that it is unbiased.

In this framing, when <math>x_i</math> is not actually a random variable, what type of parameter does the empirical correlation <math>r_{xy}</math> estimate? The issue is that for each value ''i'' we'll have <math>E(x_i) = x_i</math> and <math>\operatorname{Var}(x_i) = 0</math>. A possible interpretation of <math>r_{xy}</math> is to imagine that <math>x_i</math> defines a random variable drawn from the empirical distribution of the ''x'' values in our sample. For example, if ''x'' had 10 values from the natural numbers: 1, 2, 3, ..., 10, then we can imagine ''x'' to be a discrete uniform distribution. Under this interpretation all <math>x_i</math> have the same expectation and some positive variance. With this interpretation we can think of <math>r_{xy}</math> as the estimator of the Pearson's correlation between the random variable ''y'' and the random variable ''x'' (as we just defined it).
Simple linear regression without the intercept term (single regressor)
Sometimes it is appropriate to force the regression line to pass through the origin, because ''x'' and ''y'' are assumed to be proportional. For the model without the intercept term, <math>y = \beta x</math>, the OLS estimator for <math>\beta</math> simplifies to
:<math> \widehat\beta = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2} = \frac{\overline{xy}}{\overline{x^2}} .</math>
Substituting <math>(x - h,\ y - k)</math> in place of <math>(x,\ y)</math> gives the regression through <math>(h,\ k)</math>:
:<math> \widehat\beta = \frac{\sum_{i=1}^n (x_i - h)(y_i - k)}{\sum_{i=1}^n (x_i - h)^2} = \frac{\overline{(x - h)(y - k)}}{\overline{(x - h)^2}} = \frac{\operatorname{Cov}(x, y) + (\bar x - h)(\bar y - k)}{\operatorname{Var}(x) + (\bar x - h)^2},</math>
where Cov and Var refer to the covariance and variance of the sample data (uncorrected for bias).
The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope.
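A minimal sketch of the no-intercept estimator and the shifted version (illustrative data; the anchor point is made up for the example):
<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])     # illustrative data assumed roughly proportional
y = np.array([2.2, 3.9, 6.1, 8.0])

# OLS slope when the line is forced through the origin: y = beta * x
beta_no_intercept = np.sum(x * y) / np.sum(x ** 2)

# Regression through an arbitrary point (h, k): shift, fit through origin, shift back
h, k = 1.0, 2.0                         # hypothetical anchor point
beta_through_hk = np.sum((x - h) * (y - k)) / np.sum((x - h) ** 2)

print(beta_no_intercept, beta_through_hk)
</syntaxhighlight>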
Numerical properties
Model-based properties
Description of the statistical properties of estimators from the simple linear regression estimates requires the use of a
statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as
inhomogeneity, but this is discussed elsewhere.
Unbiasedness
The estimators <math>\widehat\alpha</math> and <math>\widehat\beta</math> are unbiased.
To formalize this assertion we must define a framework in which these estimators are random variables. We consider the residuals <math>\varepsilon_i</math> as random variables drawn independently from some distribution with mean zero. In other words, for each value of ''x'', the corresponding value of ''y'' is generated as a mean response <math>\alpha + \beta x</math> plus an additional random variable <math>\varepsilon</math> called the ''error term'', equal to zero on average. Under such interpretation, the least-squares estimators <math>\widehat\alpha</math> and <math>\widehat\beta</math> will themselves be random variables whose means will equal the "true values" <math>\alpha</math> and <math>\beta</math>. This is the definition of an unbiased estimator.
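A hedged simulation sketch of this claim (the "true" parameter values and noise level below are made up): repeatedly generating data from the model and averaging the estimates should approximately recover the true coefficients.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
alpha_true, beta_true, sigma = 2.0, 0.5, 1.0    # hypothetical "true" values
x = np.linspace(0.0, 10.0, 25)                  # fixed design points

estimates = []
for _ in range(5000):
    # y is the mean response plus an independent zero-mean error term
    y = alpha_true + beta_true * x + rng.normal(0.0, sigma, size=x.size)
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()
    estimates.append((a, b))

print(np.mean(estimates, axis=0))   # close to (2.0, 0.5) on average
</syntaxhighlight>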
Confidence intervals
The formulas given in the previous section allow one to calculate the ''point estimates'' of <math>\alpha</math> and <math>\beta</math> — that is, the coefficients of the regression line for the given set of data. However, those formulas don't tell us how precise the estimates are, i.e., how much the estimators <math>\widehat\alpha</math> and <math>\widehat\beta</math> vary from sample to sample for the specified sample size.
Confidence intervals were devised to give a plausible set of values to the estimates one might have if one repeated the experiment a very large number of times.
The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either:
# the errors in the regression are
normally distributed (the so-called ''classic regression'' assumption), or
# the number of observations is sufficiently large, in which case the estimator is approximately normally distributed.
The latter case is justified by the central limit theorem.
Normality assumption
Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean <math>\beta</math> and variance <math>\sigma^2 / \sum (x_i - \bar x)^2,</math> where <math>\sigma^2</math> is the variance of the error terms (see Proofs involving ordinary least squares). At the same time the sum of squared residuals <math>Q</math> is distributed proportionally to <math>\chi^2</math> with <math>n - 2</math> degrees of freedom, and independently from <math>\widehat\beta</math>. This allows us to construct a ''t''-value
:<math> t = \frac{\widehat\beta - \beta}{s_{\widehat\beta}}\ \sim\ t_{n - 2},</math>
where
:<math> s_{\widehat\beta} = \sqrt{\frac{\frac{1}{n - 2} \sum_{i=1}^n \widehat\varepsilon_i^{\,2}}{\sum_{i=1}^n (x_i - \bar x)^2}}</math>
is the ''standard error'' of the estimator <math>\widehat\beta</math>.
This ''t''-value has a Student's ''t''-distribution with <math>n - 2</math> degrees of freedom. Using it we can construct a confidence interval for <math>\beta</math>:
:<math> \beta \in \left[\widehat\beta - s_{\widehat\beta} t^*_{n - 2},\ \widehat\beta + s_{\widehat\beta} t^*_{n - 2}\right],</math>
at confidence level <math>(1 - \gamma)</math>, where <math>t^*_{n - 2}</math> is the <math>\left(1 - \tfrac{\gamma}{2}\right)</math>-th quantile of the <math>t_{n - 2}</math> distribution. For example, if <math>\gamma = 0.05</math> then the confidence level is 95%.
Similarly, the confidence interval for the intercept coefficient <math>\alpha</math> is given by
:<math> \alpha \in \left[\widehat\alpha - s_{\widehat\alpha} t^*_{n - 2},\ \widehat\alpha + s_{\widehat\alpha} t^*_{n - 2}\right],</math>
at confidence level (1 − ''γ''), where
:<math> s_{\widehat\alpha} = s_{\widehat\beta} \sqrt{\frac{1}{n} \sum_{i=1}^n x_i^2} = \sqrt{\frac{1}{n(n - 2)} \left(\sum_{i=1}^n \widehat\varepsilon_i^{\,2}\right) \frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n (x_i - \bar x)^2}} .</math>
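As a sketch applying these formulas (illustrative data; scipy is assumed to be available for the ''t'' quantile):
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # illustrative data
y = np.array([1.1, 2.3, 2.8, 4.2, 5.1, 5.8])
n = x.size

# Point estimates
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
resid = y - (alpha_hat + beta_hat * x)

# Standard errors from the formulas above
sigma2_hat = np.sum(resid ** 2) / (n - 2)
se_beta = np.sqrt(sigma2_hat / np.sum((x - x.mean()) ** 2))
se_alpha = se_beta * np.sqrt(np.sum(x ** 2) / n)

# 95% confidence intervals using the t_{n-2} quantile
t_star = stats.t.ppf(0.975, df=n - 2)
ci_beta = (beta_hat - t_star * se_beta, beta_hat + t_star * se_beta)
ci_alpha = (alpha_hat - t_star * se_alpha, alpha_hat + t_star * se_alpha)
print(ci_alpha, ci_beta)
</syntaxhighlight>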
The confidence intervals for <math>\alpha</math> and <math>\beta</math> give us the general idea where these regression coefficients are most likely to be. For example, in the Okun's law regression the point estimates are
:
The 95% confidence intervals for these estimates are
:
In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown
[Casella, G. and Berger, R. L. (2002), ''Statistical Inference'' (2nd Edition), Cengage, pp. 558–559.] that at confidence level (1 − ''γ'') the confidence band has hyperbolic form given by the equation
:<math> \hat y|_{x = \xi} \in \left[ \widehat\alpha + \widehat\beta \xi \pm t^*_{n - 2} \sqrt{ \left(\tfrac{1}{n - 2} \sum_{i=1}^n \widehat\varepsilon_i^{\,2}\right) \cdot \left(\tfrac{1}{n} + \tfrac{(\xi - \bar x)^2}{\sum_{i=1}^n (x_i - \bar x)^2}\right) } \right] .</math>
When the model assumes the intercept is fixed and equal to 0 (<math>\alpha = 0</math>), the standard error of the slope turns into:
:<math> s_{\widehat\beta} = \sqrt{\frac{\frac{1}{n - 1} \sum_{i=1}^n \widehat\varepsilon_i^{\,2}}{\sum_{i=1}^n x_i^2}}</math>
with <math>\widehat\varepsilon_i = y_i - \widehat\beta x_i .</math>
Asymptotic assumption
The alternative second assumption states that when the number of points in the dataset is "large enough", the
law of large numbers and the central limit theorem
become applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantile ''t*''<sub>''n''−2</sub> of Student's ''t'' distribution is replaced with the quantile ''q*'' of the standard normal distribution. Occasionally the fraction <math>\tfrac{1}{n-2}</math> is replaced with <math>\tfrac{1}{n}</math>. When <math>n</math> is large such a change does not alter the results appreciably.
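A small sketch (scipy assumed available) comparing the two quantiles, illustrating why the replacement matters little for large ''n'':
<syntaxhighlight lang="python">
from scipy import stats

q_star = stats.norm.ppf(0.975)             # standard normal quantile, ~1.96
for n in (10, 30, 100, 1000):
    t_star = stats.t.ppf(0.975, df=n - 2)  # Student's t quantile with n-2 df
    print(n, round(t_star, 4), round(q_star, 4))
# The t quantile approaches the normal quantile as n grows
</syntaxhighlight>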
Numerical example
This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the
OLS
article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead.
:
There are ''n'' = 15 points in this data set. Hand calculations would be started by finding the following five sums:
:<math> S_x = \sum x_i, \quad S_y = \sum y_i, \quad S_{xx} = \sum x_i^2, \quad S_{xy} = \sum x_i y_i, \quad S_{yy} = \sum y_i^2 ,</math>
where all sums run over ''i'' = 1, ..., ''n''.
These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors.
:<math>\begin{align}
\widehat\beta &= \frac{n S_{xy} - S_x S_y}{n S_{xx} - S_x^2}, \\[4pt]
\widehat\alpha &= \frac{S_y}{n} - \widehat\beta \frac{S_x}{n} = \bar y - \widehat\beta \bar x, \\[4pt]
s_\varepsilon^2 &= \frac{1}{n(n - 2)} \left[ n S_{yy} - S_y^2 - \widehat\beta^2 \left(n S_{xx} - S_x^2\right) \right], \\[4pt]
s_{\widehat\beta}^2 &= \frac{n\, s_\varepsilon^2}{n S_{xx} - S_x^2}, \\[4pt]
s_{\widehat\alpha}^2 &= s_{\widehat\beta}^2 \cdot \frac{S_{xx}}{n} .
\end{align}</math>

The 0.975 quantile of Student's ''t''-distribution with 13 degrees of freedom is <math>t^*_{13} \approx 2.1604</math>, and thus the 95% confidence intervals for <math>\alpha</math> and <math>\beta</math> are
:<math> \alpha \in \left[\widehat\alpha - t^*_{13} s_{\widehat\alpha},\ \widehat\alpha + t^*_{13} s_{\widehat\alpha}\right], \qquad \beta \in \left[\widehat\beta - t^*_{13} s_{\widehat\beta},\ \widehat\beta + t^*_{13} s_{\widehat\beta}\right] .</math>
The product-moment correlation coefficient might also be calculated:
:<math> r = \frac{n S_{xy} - S_x S_y}{\sqrt{\left(n S_{xx} - S_x^2\right)\left(n S_{yy} - S_y^2\right)}} .</math>
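The hand-calculation route above can be mirrored in code. The data below are illustrative placeholders, not the height–mass values from the article's table:
<syntaxhighlight lang="python">
import math

# Illustrative placeholder data (not the article's height/mass table)
x = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
y = [2.2, 2.9, 4.1, 4.8, 6.1, 6.8, 8.2]
n = len(x)

# The five sums used for hand calculation
Sx = sum(x)
Sy = sum(y)
Sxx = sum(v * v for v in x)
Sxy = sum(a * b for a, b in zip(x, y))
Syy = sum(v * v for v in y)

# Coefficient estimates and standard errors computed from the sums
beta_hat = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)
alpha_hat = Sy / n - beta_hat * Sx / n
s_eps2 = (n * Syy - Sy ** 2 - beta_hat ** 2 * (n * Sxx - Sx ** 2)) / (n * (n - 2))
se_beta = math.sqrt(n * s_eps2 / (n * Sxx - Sx ** 2))
se_alpha = se_beta * math.sqrt(Sxx / n)

print(alpha_hat, beta_hat, se_alpha, se_beta)
</syntaxhighlight>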
See also
*
Design matrix#Simple linear regression
*
Line fitting
*
Linear trend estimation
*
Linear segmented regression
*
Proofs involving ordinary least squares — derivation of all formulas used in this article in the general multidimensional case
References
External links
Wolfram MathWorld's explanation of Least Squares Fitting, and how to calculate it