Errors and residuals

In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "true value" (not necessarily observable). The error of an observation is the deviation of the observed value from the true value of a quantity of interest (for example, a population mean). The residual is the difference between the observed value and the ''estimated'' value of the quantity of interest (for example, a sample mean). The distinction is most important in regression analysis, where the concepts are sometimes called the regression errors and regression residuals and where they lead to the concept of studentized residuals. In econometrics, "errors" are also called disturbances.


Introduction

Suppose there is a series of observations from a univariate distribution and we want to estimate the mean of that distribution (the so-called location model). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean.

A statistical error (or disturbance) is the amount by which an observation differs from its expected value, the latter being based on the whole population from which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being the mean of the entire population, is typically unobservable, and hence the statistical error cannot be observed either.

A residual (or fitting deviation), on the other hand, is an observable ''estimate'' of the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample of ''n'' people. The ''sample mean'' could serve as a good estimator of the ''population'' mean. Then we have:
* The difference between the height of each man in the sample and the unobservable ''population'' mean is a ''statistical error'', whereas
* The difference between the height of each man in the sample and the observable ''sample'' mean is a ''residual''.

Note that, because of the definition of the sample mean, the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarily ''not independent''. The statistical errors, on the other hand, are independent, and their sum within the random sample is almost surely not zero.

One can standardize statistical errors (especially of a normal distribution) in a z-score (or "standard score"), and standardize residuals in a ''t''-statistic, or more generally studentized residuals.
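The distinction can be made concrete with a short numerical sketch. The following Python snippet (NumPy only; the population parameters and sample size are hypothetical, chosen to echo the heights example) computes both quantities for the same sample — the errors require the unknown population mean, while the residuals use the observable sample mean and sum to zero by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 21-year-old men: mean 1.75 m (sigma is assumed).
mu, sigma, n = 1.75, 0.07, 10
heights = rng.normal(mu, sigma, n)     # a random sample of n observed heights

errors = heights - mu                  # statistical errors: need the unknown mu
residuals = heights - heights.mean()   # residuals: use the observable sample mean

print(f"sum of errors:    {errors.sum():+.4f}")     # almost surely nonzero
print(f"sum of residuals: {residuals.sum():+.4f}")  # zero up to rounding
```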


In univariate distributions

If we assume a normally distributed population with mean μ and standard deviation σ, and choose individuals independently, then we have

:X_1, \dots, X_n \sim N\left(\mu, \sigma^2\right)

and the sample mean

:\overline{X} = \frac{X_1 + \cdots + X_n}{n}

is a random variable distributed such that:

:\overline{X} \sim N\left(\mu, \frac{\sigma^2}{n}\right).

The ''statistical errors'' are then

:e_i = X_i - \mu,

with expected values of zero, whereas the ''residuals'' are

:r_i = X_i - \overline{X}.

The sum of squares of the statistical errors, divided by ''σ''², has a chi-squared distribution with ''n'' degrees of freedom:

:\frac{1}{\sigma^2} \sum_{i=1}^n e_i^2 \sim \chi^2_n.

However, this quantity is not observable, as the population mean is unknown. The sum of squares of the residuals, on the other hand, is observable. The quotient of that sum by ''σ''² has a chi-squared distribution with only ''n'' − 1 degrees of freedom:

:\frac{1}{\sigma^2} \sum_{i=1}^n r_i^2 \sim \chi^2_{n-1}.

This difference between ''n'' and ''n'' − 1 degrees of freedom results in Bessel's correction for the estimation of sample variance of a population with unknown mean and unknown variance. No correction is necessary if the population mean is known.
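These distributional facts are easy to check by simulation. A minimal sketch (the parameters are arbitrary, and the Monte Carlo averages only approximate the exact values):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, trials = 0.0, 1.0, 5, 200_000

samples = rng.normal(mu, sigma, size=(trials, n))

errors = samples - mu
residuals = samples - samples.mean(axis=1, keepdims=True)

# The mean of a chi-squared variable equals its degrees of freedom:
print((errors**2).sum(axis=1).mean() / sigma**2)     # ~ n     = 5
print((residuals**2).sum(axis=1).mean() / sigma**2)  # ~ n - 1 = 4

# Hence dividing the residual sum of squares by n - 1 (ddof=1 in NumPy,
# i.e. Bessel's correction) gives an unbiased variance estimate:
print(samples.var(axis=1, ddof=1).mean())            # ~ sigma**2 = 1.0
```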


Remark

It is remarkable that the sum of squares of the residuals and the sample mean can be shown to be independent of each other, using, e.g., Basu's theorem. That fact, and the normal and chi-squared distributions given above, form the basis of calculations involving the t-statistic:

:T = \frac{\overline{X}_n - \mu_0}{S_n / \sqrt{n}},

where \overline{X}_n - \mu_0 represents the errors, S_n represents the sample standard deviation for a sample of size ''n'' and unknown ''σ'', and the denominator term S_n/\sqrt{n} accounts for the standard deviation of the errors according to

:\operatorname{Var}\left(\overline{X}_n\right) = \frac{\sigma^2}{n}.

The probability distributions of the numerator and the denominator separately depend on the value of the unobservable population standard deviation ''σ'', but ''σ'' appears in both the numerator and the denominator and cancels. That is fortunate because it means that even though we do not know ''σ'', we know the probability distribution of this quotient: it has a Student's t-distribution with ''n'' − 1 degrees of freedom. We can therefore use this quotient to find a confidence interval for ''μ''. This t-statistic can be interpreted as the number of estimated standard errors by which the sample mean differs from the hypothesized mean μ₀.
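As an illustration, the quotient can be computed by hand and compared against a library routine. A sketch with hypothetical data (SciPy's one-sample t-test is used only as a cross-check):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(1.75, 0.07, size=20)   # hypothetical sample; mu is unknown in practice
mu0 = 1.75                            # hypothesized population mean
n = x.size

# The quotient from the text: (sample mean - mu0) / (S_n / sqrt(n))
t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
p = 2 * stats.t.sf(abs(t), df=n - 1)  # two-sided tail of t with n - 1 df

print(t, p)
print(stats.ttest_1samp(x, mu0))      # SciPy's one-sample t-test agrees
```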


Regressions

In regression analysis, the distinction between ''errors'' and ''residuals'' is subtle and important, and leads to the concept of studentized residuals. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from the ''fitted'' function are the residuals.

If the linear model is applicable, a scatterplot of residuals plotted against the independent variable should be random about zero, with no trend in the residuals. If the data exhibit a trend, the regression model is likely misspecified; for example, the true function may be a quadratic or higher-order polynomial. If the residuals are trendless but "fan out", they exhibit a phenomenon called heteroscedasticity; if their spread is roughly constant across the domain, they exhibit homoscedasticity.
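A sketch of both diagnostics on synthetic data whose noise deliberately grows with the input, so the fan-out is visible even without a plot (the split point used to compare spreads is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0 + 0.3 * x)  # noise spread grows with x

slope, intercept = np.polyfit(x, y, 1)   # ordinary least-squares line
residuals = y - (intercept + slope * x)

# Trend check: residuals correlated with x**2 would suggest curvature the
# straight line missed (here the true function is linear, so this is ~0).
print(np.corrcoef(x**2, residuals)[0, 1])

# Fan-out check: residual spread in the right half of the domain vs the left.
lo, hi = residuals[x < 5.0], residuals[x >= 5.0]
print(lo.std(ddof=1), hi.std(ddof=1))    # clearly unequal: heteroscedasticity
```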
However, a terminological difference arises in the expression mean squared error (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed ''residuals'', and not of the unobservable ''errors''. If that sum of squares is divided by ''n'', the number of observations, the result is the mean of the squared residuals. Since this is a biased estimate of the variance of the unobserved errors, the bias is removed by dividing the sum of the squared residuals by ''df'' = ''n'' − ''p'' − 1 instead of ''n'', where ''df'' is the number of degrees of freedom: ''n'' minus the number of parameters ''p'' being estimated (excluding the intercept), minus 1. This forms an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error.

Another way to calculate the mean square of error arises when analyzing the variance of linear regression using a technique like that used in ANOVA (they are the same because ANOVA is a type of regression): the sum of squares of the residuals (also called the sum of squares of the error) is divided by the degrees of freedom ''n'' − ''p'' − 1, where ''p'' is the number of parameters estimated in the model (one for each variable in the regression equation, not including the intercept). One can also calculate the mean square of the model by dividing the sum of squares of the model by its degrees of freedom, which is just the number of parameters. The F value can then be calculated by dividing the mean square of the model by the mean square of the error, and significance can be determined (which is why the mean squares are wanted in the first place).
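A sketch of the degrees-of-freedom adjustment for a simple one-predictor regression on synthetic data (the noise level is an assumption, chosen so the unbiased estimate has a known target):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 1                  # p = number of slope parameters (intercept excluded)
x = rng.uniform(0.0, 10.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.5, n)   # true error s.d. = 1.5

coeffs = np.polyfit(x, y, 1)           # fit y = b1*x + b0
residuals = y - np.polyval(coeffs, x)
ssr = (residuals**2).sum()             # sum of squared residuals

mse_biased = ssr / n                   # biased estimate of the error variance
mse = ssr / (n - p - 1)                # unbiased: df = n - p - 1
print(mse_biased, mse)                 # mse should be near 1.5**2 = 2.25
```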
However, because of the behavior of the process of regression, the ''distributions'' of residuals at different data points (of the input variable) may vary ''even if'' the errors themselves are identically distributed. Concretely, in a linear regression where the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will be ''higher'' than the variability of residuals at the ends of the domain: linear regressions fit endpoints better than the middle. This is also reflected in the influence functions of various data points on the regression coefficients: endpoints have more influence. Thus to compare residuals at different inputs, one needs to adjust the residuals by the expected variability of ''residuals'', which is called studentizing. This is particularly important in the case of detecting outliers, where the case in question is somehow different from the others in a dataset. For example, a large residual may be expected in the middle of the domain, but considered an outlier at the end of the domain.
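One common form of this adjustment is the ''internally'' studentized residual: the variance of the ''i''-th residual is σ²(1 − h_ii), where h_ii is the ''i''-th diagonal entry of the hat matrix, so each residual is divided by an estimate of its own standard deviation. A sketch on synthetic simple-regression data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
x = np.linspace(0.0, 10.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), x])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
leverage = np.diag(H)                       # h_ii, largest at the endpoints
s2 = (residuals**2).sum() / (n - 2)         # error-variance estimate (df = n - 2)

# Var(r_i) = sigma^2 * (1 - h_ii), so scale each residual accordingly:
studentized = residuals / np.sqrt(s2 * (1 - leverage))
print(leverage[[0, n // 2, n - 1]])         # endpoints have higher leverage
print(studentized[:3])
```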


Other uses of the word "error" in statistics

The use of the term "error" as discussed in the sections above is in the sense of a deviation of a value from a hypothetical unobserved value. At least two other uses also occur in statistics, both referring to observable prediction errors:

The ''mean squared error'' (MSE) refers to the amount by which the values predicted by an estimator differ from the quantities being estimated (typically outside the sample from which the model was estimated). The ''root mean square error'' (RMSE) is the square root of the MSE. The ''sum of squares of errors'' (SSE) is the MSE multiplied by the sample size.

The ''sum of squares of residuals'' (SSR) is the sum of the squares of the deviations of the actual values from the predicted values, within the sample used for estimation. This is the basis for the least squares estimate, where the regression coefficients are chosen such that the SSR is minimal (i.e. its derivative is zero).

Likewise, the ''sum of absolute errors'' (SAE) is the sum of the absolute values of the residuals, which is minimized in the least absolute deviations approach to regression.

The mean error (ME) is the bias. The ''mean residual'' (MR) is always zero for least-squares estimators.
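All of these quantities are one-liners given observed and predicted values. A sketch with made-up numbers:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])   # hypothetical observed values
y_pred = np.array([2.5,  0.0, 2.0, 8.0])   # hypothetical predictions

residuals = y_true - y_pred
sse = (residuals**2).sum()           # sum of squared errors/residuals
mse = sse / y_true.size              # mean squared error
rmse = np.sqrt(mse)                  # root mean square error
sae = np.abs(residuals).sum()        # sum of absolute errors
me = residuals.mean()                # mean error, i.e. the bias

print(sse, mse, rmse, sae, me)
```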


See also

* Absolute deviation
* Consensus forecasts
* Error detection and correction
* Explained sum of squares
* Innovation (signal processing)
* Lack-of-fit sum of squares
* Margin of error
* Mean absolute error
* Observational error
* Propagation of error
* Probable error
* Random and systematic errors
* Reduced chi-squared statistic
* Regression dilution
* Sampling error
* Standard error
* Studentized residual
* Type I and type II errors

