In statistics, simple linear regression (SLR) is a linear regression model with a single explanatory variable. That is, it concerns two-dimensional sample points with one independent variable and one dependent variable (conventionally, the ''x'' and ''y'' coordinates in a Cartesian coordinate system) and finds a linear function (a non-vertical straight line) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable. The adjective ''simple'' refers to the fact that the outcome variable is related to a single predictor.

It is common to make the additional stipulation that the ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared ''residual'' (vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. In this case, the slope of the fitted line is equal to the correlation between ''y'' and ''x'' corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass (\bar x, \bar y) of the data points.


Formulation and computation

Consider the model function

: y = \alpha + \beta x,

which describes a line with slope \beta and ''y''-intercept \alpha. In general, such a relationship may not hold exactly for the largely unobserved population of values of the independent and dependent variables; we call the unobserved deviations from the above equation the errors. Suppose we observe ''n'' data pairs and call them \{(x_i, y_i),\ i = 1, \ldots, n\}. We can describe the underlying relationship between y_i and x_i involving this error term \varepsilon_i by

: y_i = \alpha + \beta x_i + \varepsilon_i.

This relationship between the true (but unobserved) underlying parameters \alpha and \beta and the data points is called a linear regression model.

The goal is to find estimated values \widehat\alpha and \widehat\beta for the parameters \alpha and \beta which would provide the "best" fit in some sense for the data points. As mentioned in the introduction, in this article the "best" fit will be understood as in the least-squares approach: a line that minimizes the sum of squared residuals (see also Errors and residuals) \widehat\varepsilon_i (differences between actual and predicted values of the dependent variable ''y''), each of which is given by, for any candidate parameter values \alpha and \beta,

: \widehat\varepsilon_i = y_i - \alpha - \beta x_i.

In other words, \widehat\alpha and \widehat\beta solve the following minimization problem:

: (\widehat\alpha,\, \widehat\beta) = \operatorname{argmin}_{\alpha,\,\beta} \left( Q(\alpha, \beta) \right),

where the objective function Q is:

: Q(\alpha, \beta) = \sum_{i=1}^n \widehat\varepsilon_i^{\,2} = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2 .

By expanding to get a quadratic expression in \alpha and \beta, we can derive minimizing values of the function arguments, denoted \widehat\alpha and \widehat\beta:

: \begin{align}
\widehat\alpha &= \bar y - \widehat\beta\,\bar x, \\
\widehat\beta &= \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^n (x_i - \bar x)^2}.
\end{align}

Here we have introduced \bar x and \bar y as the averages of the x_i and y_i, respectively.
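As a concrete illustration of these closed-form estimates, the following sketch (a minimal example assuming NumPy is available; the function name and array arguments are arbitrary) computes \widehat\alpha and \widehat\beta from the centered sums.

```python
import numpy as np

def fit_simple_ols(x, y):
    """Return (alpha_hat, beta_hat) for the least-squares line y ~ alpha + beta * x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x_bar, y_bar = x.mean(), y.mean()
    # beta_hat = sum((x_i - x_bar)(y_i - y_bar)) / sum((x_i - x_bar)^2)
    beta_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    # alpha_hat = y_bar - beta_hat * x_bar
    alpha_hat = y_bar - beta_hat * x_bar
    return alpha_hat, beta_hat
```

For instance, fit_simple_ols([1, 2, 3], [2, 4, 6]) returns approximately (0.0, 2.0), since those points lie exactly on the line y = 2x.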


Expanded formulas

The above equations are efficient to use if the means of the ''x'' and ''y'' variables (\bar x and \bar y) are known. If the means are not known at the time of calculation, it may be more efficient to use the expanded version of the \widehat\alpha and \widehat\beta equations. These expanded equations may be derived from the more general polynomial regression equations by defining the regression polynomial to be of order 1, as follows.

: \begin{bmatrix} n & \sum_{i=1}^n x_i \\ \sum_{i=1}^n x_i & \sum_{i=1}^n x_i^2 \end{bmatrix} \begin{bmatrix} \widehat\alpha \\ \widehat\beta \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^n y_i \\ \sum_{i=1}^n x_i y_i \end{bmatrix}

The above system of linear equations may be solved directly, or stand-alone equations for \widehat\alpha and \widehat\beta may be derived by expanding the matrix equations above. The resultant equations are algebraically equivalent to the ones shown in the prior paragraph, and are shown below without proof.

: \widehat\alpha = \frac{\sum_{i=1}^n y_i \sum_{i=1}^n x_i^2 - \sum_{i=1}^n x_i \sum_{i=1}^n x_i y_i}{n \sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2}

: \widehat\beta = \frac{n \sum_{i=1}^n x_i y_i - \sum_{i=1}^n x_i \sum_{i=1}^n y_i}{n \sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2}
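The 2-by-2 system above can also be solved numerically rather than through the expanded closed forms; the sketch below (illustrative only, assuming NumPy) assembles the normal-equations matrix and right-hand side exactly as written and solves for \widehat\alpha and \widehat\beta.

```python
import numpy as np

def fit_via_normal_equations(x, y):
    """Solve [[n, Sx], [Sx, Sxx]] @ [alpha, beta] = [Sy, Sxy] for the order-1 polynomial fit."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    Sx, Sy = x.sum(), y.sum()
    Sxx, Sxy = np.sum(x * x), np.sum(x * y)
    A = np.array([[n, Sx], [Sx, Sxx]])   # normal-equations matrix
    b = np.array([Sy, Sxy])              # right-hand side
    alpha_hat, beta_hat = np.linalg.solve(A, b)
    return alpha_hat, beta_hat
```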


Interpretation


Relationship with the sample covariance matrix

The solution can be reformulated using elements of the covariance matrix:

: \widehat\beta = \frac{s_{x,y}}{s_x^2} = r_{xy} \frac{s_y}{s_x}

where
* r_{xy} is the sample correlation coefficient between ''x'' and ''y'',
* s_x and s_y are the uncorrected sample standard deviations of ''x'' and ''y'',
* s_{x,y} is the sample covariance of ''x'' and ''y''.

Substituting the above expressions for \widehat\alpha and \widehat\beta into the original solution yields

: \frac{\widehat y - \bar y}{s_y} = r_{xy} \frac{x - \bar x}{s_x}.

This shows that r_{xy} is the slope of the regression line of the standardized data points (and that this line passes through the origin). Since -1 \leq r_{xy} \leq 1, if ''x'' is some measurement and ''y'' is a follow-up measurement from the same item, then we expect that ''y'' (on average) will be closer to the mean measurement than it was to the original value of ''x''. This phenomenon is known as regression toward the mean.

Generalizing the \bar x notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples. For example:

: \overline{xy} = \frac{1}{n} \sum_{i=1}^n x_i y_i.

This notation allows us a concise formula for r_{xy}:

: r_{xy} = \frac{\overline{xy} - \bar x \bar y}{\sqrt{\left(\overline{x^2} - \bar x^2\right)\left(\overline{y^2} - \bar y^2\right)}}.

The coefficient of determination ("R squared") is equal to r_{xy}^2 when the model is linear with a single independent variable. See sample correlation coefficient for additional details.
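The identity \widehat\beta = r_{xy}\, s_y / s_x can be checked numerically; the following sketch (illustrative only, assuming NumPy, with ddof=0 keeping the standard deviations uncorrected as in the formula) recovers the least-squares slope from the correlation coefficient and the two standard deviations.

```python
import numpy as np

def slope_from_correlation(x, y):
    """beta_hat = r_xy * (s_y / s_x), using uncorrected (ddof=0) standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r_xy = np.corrcoef(x, y)[0, 1]            # sample correlation coefficient
    s_x, s_y = x.std(ddof=0), y.std(ddof=0)   # uncorrected standard deviations
    return r_xy * s_y / s_x
```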


Interpretation about the slope

By multiplying all members of the summation in the numerator by

: \frac{x_i - \bar x}{x_i - \bar x} = 1

(thereby not changing it):

: \widehat\beta = \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^n (x_i - \bar x)^2} = \sum_{i=1}^n \frac{(x_i - \bar x)^2}{\sum_{j=1}^n (x_j - \bar x)^2} \cdot \frac{y_i - \bar y}{x_i - \bar x}

We can see that the slope (tangent of angle) of the regression line is the weighted average of \frac{y_i - \bar y}{x_i - \bar x}, the slope (tangent of angle) of the line that connects the ''i''-th point to the average of all points, weighted by (x_i - \bar x)^2. The further a point lies from \bar x, the more "important" it is: small errors in its position affect the slope of the line connecting it to the center point less, so that slope is a more reliable indication of \beta.
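This weighted-average reading of the slope is easy to verify; the sketch below (illustrative only, assuming NumPy and that no x_i coincides exactly with \bar x) computes the per-point slopes toward the center of mass and their weights, and the result equals \widehat\beta.

```python
import numpy as np

def slope_as_weighted_average(x, y):
    """Weighted average of (y_i - y_bar)/(x_i - x_bar), with weights (x_i - x_bar)^2."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx, dy = x - x.mean(), y - y.mean()
    weights = dx ** 2
    point_slopes = dy / dx   # slope from each point to the center of mass (assumes dx != 0)
    return np.sum(weights * point_slopes) / np.sum(weights)
```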


Interpretation about the intercept

: \widehat\alpha = \bar y - \widehat\beta\,\bar x.

Given \widehat\beta = \tan(\theta) = dy / dx \rightarrow dy = dx \times \widehat\beta, where \theta is the angle the line makes with the positive ''x'' axis, the intercept is the value of the line at x = 0, i.e. a run of dx = \bar x back from the point (\bar x, \bar y):

: y_{\text{intercept}} = \bar y - dx \times \widehat\beta = \bar y - dy.


Interpretation about the correlation

In the above formulation, notice that each x_i is a constant ("known upfront") value, while the y_i are random variables that depend on the linear function of x_i and the random term \varepsilon_i. This assumption is used when deriving the standard error of the slope and showing that it is unbiased.

In this framing, when x_i is not actually a random variable, what type of parameter does the empirical correlation r_{xy} estimate? The issue is that for each value ''i'' we'll have E(x_i) = x_i and \operatorname{Var}(x_i) = 0. A possible interpretation of r_{xy} is to imagine that x_i defines a random variable drawn from the empirical distribution of the ''x'' values in our sample. For example, if ''x'' had 10 values from the natural numbers: 1, 2, 3, \ldots, 10, then we can imagine ''x'' to be a discrete uniform distribution. Under this interpretation all x_i have the same expectation and some positive variance. With this interpretation we can think of r_{xy} as the estimator of the Pearson's correlation between the random variable ''y'' and the random variable ''x'' (as we just defined it).


Numerical properties

When the model includes an intercept term, the fitted line has the following numerical properties:
* The regression line goes through the center of mass point (\bar x, \bar y) of the data.
* The sum (and hence the mean) of the residuals is zero: \sum_{i=1}^n \widehat\varepsilon_i = 0.
* The residuals are uncorrelated with the observed x_i values: \sum_{i=1}^n x_i \widehat\varepsilon_i = 0.

Statistical properties

Describing the statistical properties of the simple linear regression estimators requires the use of a statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as inhomogeneity, but this is discussed elsewhere.


Unbiasedness

The estimators \widehat\alpha and \widehat\beta are unbiased. To formalize this assertion we must define a framework in which these estimators are random variables. We consider the errors \varepsilon_i as random variables drawn independently from some distribution with mean zero. In other words, for each value of ''x'', the corresponding value of ''y'' is generated as a mean response \alpha + \beta x plus an additional random variable \varepsilon called the ''error term'', equal to zero on average. Under such an interpretation, the least-squares estimators \widehat\alpha and \widehat\beta will themselves be random variables whose means will equal the "true values" \alpha and \beta. This is the definition of an unbiased estimator.
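Unbiasedness can be illustrated by simulation under this framework (fixed ''x'' values, zero-mean errors); the sketch below uses made-up "true" parameters and assumes NumPy, and the averaged estimates should approach \alpha and \beta.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true, beta_true, sigma = 2.0, 0.5, 1.0   # hypothetical "true" parameters
x = np.linspace(0.0, 10.0, 25)                  # fixed design points

estimates = []
for _ in range(10_000):
    # generate y as mean response plus zero-mean error, then refit
    y = alpha_true + beta_true * x + rng.normal(0.0, sigma, size=x.size)
    beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    alpha_hat = y.mean() - beta_hat * x.mean()
    estimates.append((alpha_hat, beta_hat))

print(np.mean(estimates, axis=0))  # averages close to (2.0, 0.5)
```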


Variance of the mean response

Since the data in this context is defined to be (''x'', ''y'') pairs for every observation, the ''mean response'' at a given value of ''x'', say x_d, is an estimate of the mean of the ''y'' values in the population at the ''x'' value of x_d, that is \hat E(y \mid x_d) \equiv \hat y_d. The variance of the mean response is given by:

: \operatorname{Var}\left(\hat\alpha + \hat\beta x_d\right) = \operatorname{Var}\left(\hat\alpha\right) + \left(\operatorname{Var} \hat\beta\right) x_d^2 + 2 x_d \operatorname{Cov}\left(\hat\alpha, \hat\beta\right).

This expression can be simplified to

: \operatorname{Var}\left(\hat\alpha + \hat\beta x_d\right) = \sigma^2\left(\frac{1}{m} + \frac{(x_d - \bar x)^2}{\sum (x_i - \bar x)^2}\right),

where ''m'' is the number of data points. To demonstrate this simplification, one can make use of the identity

: \sum (x_i - \bar x)^2 = \sum x_i^2 - \frac{1}{m}\left(\sum x_i\right)^2.
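Treating \sigma^2 as known (or replaced by an estimate), the simplified expression translates directly into code; the following sketch (illustrative, assuming NumPy, with ''m'' data points as in the text) evaluates the variance of the mean response at a chosen x_d.

```python
import numpy as np

def mean_response_variance(x, sigma2, x_d):
    """Var(alpha_hat + beta_hat * x_d) = sigma^2 * (1/m + (x_d - x_bar)^2 / sum((x_i - x_bar)^2))."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    x_bar = x.mean()
    return sigma2 * (1.0 / m + (x_d - x_bar) ** 2 / np.sum((x - x_bar) ** 2))
```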


Variance of the predicted response

The ''predicted response'' distribution is the predicted distribution of the residuals at the given point x_d. So the variance is given by

: \begin{align}
\operatorname{Var}\left(y_d - \left[\hat\alpha + \hat\beta x_d\right]\right) &= \operatorname{Var}(y_d) + \operatorname{Var}\left(\hat\alpha + \hat\beta x_d\right) - 2\operatorname{Cov}\left(y_d, \left[\hat\alpha + \hat\beta x_d\right]\right) \\
&= \operatorname{Var}(y_d) + \operatorname{Var}\left(\hat\alpha + \hat\beta x_d\right).
\end{align}

The second line follows from the fact that \operatorname{Cov}\left(y_d, \left[\hat\alpha + \hat\beta x_d\right]\right) is zero because the new prediction point is independent of the data used to fit the model. Additionally, the term \operatorname{Var}\left(\hat\alpha + \hat\beta x_d\right) was calculated earlier for the mean response. Since \operatorname{Var}(y_d) = \sigma^2 (a fixed but unknown parameter that can be estimated), the variance of the predicted response is given by

: \begin{align}
\operatorname{Var}\left(y_d - \left[\hat\alpha + \hat\beta x_d\right]\right) &= \sigma^2 + \sigma^2\left(\frac{1}{m} + \frac{(x_d - \bar x)^2}{\sum (x_i - \bar x)^2}\right) \\
&= \sigma^2\left(1 + \frac{1}{m} + \frac{(x_d - \bar x)^2}{\sum (x_i - \bar x)^2}\right).
\end{align}
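The prediction variance adds one extra \sigma^2 for the new observation's own error term; a minimal self-contained sketch (assuming NumPy) follows.

```python
import numpy as np

def predicted_response_variance(x, sigma2, x_d):
    """Var(y_d - [alpha_hat + beta_hat * x_d]) = sigma^2 * (1 + 1/m + (x_d - x_bar)^2 / sum((x_i - x_bar)^2))."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    x_bar = x.mean()
    return sigma2 * (1.0 + 1.0 / m + (x_d - x_bar) ** 2 / np.sum((x - x_bar) ** 2))
```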


Confidence intervals

The formulas given in the previous section allow one to calculate the ''point estimates'' of \alpha and \beta, that is, the coefficients of the regression line for the given set of data. However, those formulas do not tell us how precise the estimates are, i.e., how much the estimators \widehat\alpha and \widehat\beta vary from sample to sample for the specified sample size. Confidence intervals were devised to give a plausible set of values to the estimates one might have if one repeated the experiment a very large number of times.

The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either:
# the errors in the regression are normally distributed (the so-called ''classic regression'' assumption), or
# the number of observations ''n'' is sufficiently large, in which case the estimator is approximately normally distributed.

The latter case is justified by the central limit theorem.


Normality assumption

Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean \beta and variance \sigma^2\left/\sum(x_i - \bar x)^2\right., where \sigma^2 is the variance of the error terms (see Proofs involving ordinary least squares). At the same time the sum of squared residuals Q is distributed proportionally to \chi^2 with n-2 degrees of freedom, and independently from \widehat\beta. This allows us to construct a ''t''-value

: t = \frac{\widehat\beta - \beta}{s_{\widehat\beta}}\ \sim\ t_{n-2},

where

: s_{\widehat\beta} = \sqrt{\frac{\frac{1}{n-2}\sum_{i=1}^n \widehat\varepsilon_i^{\,2}}{\sum_{i=1}^n (x_i - \bar x)^2}}

is the unbiased ''standard error'' estimator of the estimator \widehat\beta. This ''t''-value has a Student's ''t''-distribution with n-2 degrees of freedom. Using it we can construct a confidence interval for \beta:

: \beta \in \left[\widehat\beta - s_{\widehat\beta}\, t^*_{n-2},\ \widehat\beta + s_{\widehat\beta}\, t^*_{n-2}\right]

at confidence level (1 − ''γ''), where t^*_{n-2} is the \left(1 - \tfrac{\gamma}{2}\right)\text{-th} quantile of the t_{n-2} distribution. For example, if \gamma = 0.05 then the confidence level is 95%.

Similarly, the confidence interval for the intercept coefficient \alpha is given by

: \alpha \in \left[\widehat\alpha - s_{\widehat\alpha}\, t^*_{n-2},\ \widehat\alpha + s_{\widehat\alpha}\, t^*_{n-2}\right]

at confidence level (1 − ''γ''), where

: s_{\widehat\alpha} = s_{\widehat\beta}\sqrt{\frac{1}{n}\sum_{i=1}^n x_i^2} = \sqrt{\frac{1}{n(n-2)}\left(\sum_{i=1}^n \widehat\varepsilon_i^{\,2}\right) \frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n (x_i - \bar x)^2}}.

The confidence intervals for \alpha and \beta give us the general idea where these regression coefficients are most likely to be. For example, in the Okun's law regression shown here the point estimates are

: \widehat\alpha = 0.859, \qquad \widehat\beta = -1.817.

The 95% confidence intervals for these estimates are

: \alpha \in \left[0.76,\ 0.96\right], \qquad \beta \in \left[-2.06,\ -1.58\right].

In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown (Casella, G. and Berger, R. L. (2002), ''Statistical Inference'' (2nd Edition), Cengage, pp. 558–559) that at confidence level (1 − ''γ'') the confidence band has hyperbolic form given by the equation

: (\alpha + \beta \xi) \in \left[\,\widehat\alpha + \widehat\beta \xi \pm t^*_{n-2} \sqrt{\left(\frac{1}{n-2}\sum_{i=1}^n \widehat\varepsilon_i^{\,2}\right)\cdot\left(\frac{1}{n} + \frac{(\xi - \bar x)^2}{\sum (x_i - \bar x)^2}\right)}\,\right].

When the model assumes the intercept is fixed and equal to 0 (\alpha = 0), the standard error of the slope turns into:

: s_{\widehat\beta} = \sqrt{\frac{\frac{1}{n-1}\sum_{i=1}^n \widehat\varepsilon_i^{\,2}}{\sum_{i=1}^n x_i^2}}

with \widehat\varepsilon_i = y_i - \widehat y_i.
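Putting the normal-theory formulas together, the following sketch (illustrative only, assuming NumPy and SciPy; \gamma = 0.05 gives 95% intervals) computes the standard errors and the confidence intervals for both coefficients.

```python
import numpy as np
from scipy import stats

def slr_confidence_intervals(x, y, gamma=0.05):
    """Return ((alpha_lo, alpha_hi), (beta_lo, beta_hi)) at confidence level 1 - gamma."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    x_bar, y_bar = x.mean(), y.mean()
    beta_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    alpha_hat = y_bar - beta_hat * x_bar
    resid = y - alpha_hat - beta_hat * x
    s2 = np.sum(resid ** 2) / (n - 2)                  # estimate of sigma^2
    se_beta = np.sqrt(s2 / np.sum((x - x_bar) ** 2))   # s_beta_hat
    se_alpha = se_beta * np.sqrt(np.mean(x ** 2))      # s_alpha_hat
    t_star = stats.t.ppf(1.0 - gamma / 2.0, df=n - 2)  # (1 - gamma/2) quantile of t_{n-2}
    return ((alpha_hat - t_star * se_alpha, alpha_hat + t_star * se_alpha),
            (beta_hat - t_star * se_beta, beta_hat + t_star * se_beta))
```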


Asymptotic assumption

The alternative second assumption states that when the number of points in the dataset is "large enough", the law of large numbers and the central limit theorem become applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantile t^*_{n-2} of Student's ''t'' distribution is replaced with the quantile q^* of the standard normal distribution. Occasionally the fraction \frac{1}{n-2} is replaced with \frac{1}{n}. When ''n'' is large such a change does not alter the results appreciably.


Numerical example

This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the OLS article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead.

There are ''n'' = 15 points in this data set. Hand calculations would be started by finding the following five sums:

: \begin{align}
S_x &= \sum x_i = 24.76, & S_y &= \sum y_i = 931.17, \\
S_{xx} &= \sum x_i^2 = 41.0532, & S_{yy} &= \sum y_i^2 = 58498.5439, \\
S_{xy} &= \sum x_i y_i = 1548.2453 &&
\end{align}

These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors.

: \begin{align}
\widehat\beta &= \frac{n S_{xy} - S_x S_y}{n S_{xx} - S_x^2} = 61.272 \\
\widehat\alpha &= \frac{1}{n} S_y - \widehat\beta \frac{1}{n} S_x = -39.062 \\
s_\varepsilon^2 &= \frac{1}{n(n-2)} \left[ n S_{yy} - S_y^2 - \widehat\beta^2 (n S_{xx} - S_x^2) \right] = 0.5762 \\
s_{\widehat\beta}^2 &= \frac{n s_\varepsilon^2}{n S_{xx} - S_x^2} = 3.1539 \\
s_{\widehat\alpha}^2 &= s_{\widehat\beta}^2 \frac{1}{n} S_{xx} = 8.63185
\end{align}

The 0.975 quantile of Student's ''t''-distribution with 13 degrees of freedom is t^*_{13} = 2.1604, and thus the 95% confidence intervals for \alpha and \beta are

: \begin{align}
& \alpha \in \left[\,\widehat\alpha \mp t^*_{13}\, s_{\widehat\alpha}\,\right] = \left[\,{-45.4},\ {-32.7}\,\right] \\
& \beta \in \left[\,\widehat\beta \mp t^*_{13}\, s_{\widehat\beta}\,\right] = \left[\,57.4,\ 65.1\,\right]
\end{align}

The product-moment correlation coefficient might also be calculated:

: \widehat r = \frac{n S_{xy} - S_x S_y}{\sqrt{(n S_{xx} - S_x^2)(n S_{yy} - S_y^2)}} = 0.9946
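The hand calculation can be reproduced in a few lines. The sketch below assumes the 15-point height and mass sample used in the ordinary least squares article (the data table itself does not appear in this text, so the values are reproduced here as an assumption) and recovers \widehat\beta \approx 61.27 and \widehat\alpha \approx -39.06.

```python
import numpy as np

# Heights (m) and masses (kg), assumed to match the sample referenced in the text.
height = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
                   1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
mass = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
                 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])

n = len(height)
Sx, Sy = height.sum(), mass.sum()
Sxx, Syy, Sxy = np.sum(height**2), np.sum(mass**2), np.sum(height * mass)

beta_hat = (n * Sxy - Sx * Sy) / (n * Sxx - Sx**2)   # ~ 61.27
alpha_hat = (Sy - beta_hat * Sx) / n                 # ~ -39.06
print(beta_hat, alpha_hat)
```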


Alternatives

In SLR, there is an underlying assumption that only the dependent variable contains measurement error; if the explanatory variable is also measured with error, then simple regression is not appropriate for estimating the underlying relationship because it will be biased due to regression dilution.

Other estimation methods that can be used in place of ordinary least squares include least absolute deviations (minimizing the sum of absolute values of residuals) and the Theil–Sen estimator (which chooses a line whose slope is the median of the slopes determined by pairs of sample points). Deming regression (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit. Because squared residuals give large weight to extreme observations, ordinary least squares can lead to a model that attempts to fit the outliers more than the data; the more robust alternatives above are less affected by this.


Line fitting

Line fitting is the process of constructing a straight line that has the best fit to a series of data points. The least-squares approach described in this article is the most common criterion, but others (such as total least squares or least absolute deviations, mentioned above) are also used; see the line fitting article for a broader survey.

Simple linear regression without the intercept term (single regressor)

Sometimes it is appropriate to force the regression line to pass through the origin, because ''x'' and ''y'' are assumed to be proportional. For the model without the intercept term, y = \beta x, the OLS estimator for \beta simplifies to

: \widehat\beta = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2} = \frac{\overline{xy}}{\overline{x^2}}

Substituting (x - h,\ y - k) in place of (x,\ y) gives the regression through (h,\ k):

: \begin{align}
\widehat\beta &= \frac{\overline{(x - h)(y - k)}}{\overline{(x - h)^2}} \\
&= \frac{\overline{xy} - k \bar x - h \bar y + hk}{\overline{x^2} - 2 h \bar x + h^2} \\
&= \frac{\overline{xy} - \bar x \bar y + (\bar x - h)(\bar y - k)}{\overline{x^2} - \bar x^2 + (\bar x - h)^2} \\
&= \frac{\operatorname{Cov}(x, y) + (\bar x - h)(\bar y - k)}{\operatorname{Var}(x) + (\bar x - h)^2},
\end{align}

where Cov and Var refer to the covariance and variance of the sample data (uncorrected for bias). The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope.
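A minimal sketch of the no-intercept estimator (assuming NumPy) follows; subtracting ''h'' and ''k'' from the data before calling it gives the regression through (h, k).

```python
import numpy as np

def slope_through_origin(x, y):
    """OLS slope for the model y = beta * x (no intercept): sum(x_i * y_i) / sum(x_i^2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sum(x * y) / np.sum(x ** 2)

# Regression through an arbitrary point (h, k): shift the data first.
# beta_hk = slope_through_origin(x - h, y - k)
```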


See also

* Design matrix#Simple linear regression
* Linear trend estimation
* Linear segmented regression
* Proofs involving ordinary least squares: derivation of all formulas used in this article in the general multidimensional case
* Newey–West estimator


References


External links


* Wolfram MathWorld's explanation of Least Squares Fitting, and how to calculate it

