In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "true value" (not necessarily observable). The error of an observation is the deviation of the observed value from the true value of a quantity of interest (for example, a population mean). The residual is the difference between the observed value and the ''estimated'' value of the quantity of interest (for example, a sample mean). The distinction is most important in regression analysis, where the concepts are sometimes called the regression errors and regression residuals and where they lead to the concept of studentized residuals. In econometrics, "errors" are also called disturbances.
Introduction
Suppose there is a series of observations from a univariate distribution and we want to estimate the mean of that distribution (the so-called location model). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean.
A statistical error (or disturbance) is the amount by which an observation differs from its expected value, the latter being based on the whole population from which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being the mean of the entire population, is typically unobservable, and hence the statistical error cannot be observed either.
A residual (or fitting deviation), on the other hand, is an observable ''estimate'' of the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample of ''n'' people. The ''sample mean'' could serve as a good estimator of the ''population'' mean. Then we have:
* The difference between the height of each man in the sample and the unobservable ''population'' mean is a ''statistical error'', whereas
* The difference between the height of each man in the sample and the observable ''sample'' mean is a ''residual''.
Note that, because of the definition of the sample mean, the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarily ''not independent''. The statistical errors, on the other hand, are independent, and their sum within the random sample is almost surely not zero.
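The contrast can be made concrete with a short simulation. Below is a minimal Python sketch (the population values <code>mu</code> and <code>sigma</code> are invented for illustration; in practice the population mean is not observable) showing that the residuals sum to zero while the errors do not:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

mu = 1.75      # hypothetical population mean height (meters)
sigma = 0.07   # hypothetical population standard deviation
n = 10

heights = rng.normal(mu, sigma, size=n)   # random sample of n men

errors = heights - mu                 # statistical errors: need the (unobservable) population mean
residuals = heights - heights.mean()  # residuals: use the observable sample mean

print("sum of errors:   ", errors.sum())     # almost surely nonzero
print("sum of residuals:", residuals.sum())  # zero, up to floating-point rounding
</syntaxhighlight>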
One can standardize statistical errors (especially of a normal distribution) in a z-score (or "standard score"), and standardize residuals in a ''t''-statistic, or more generally studentized residuals.
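As a sketch of the distinction (again with invented parameter values), standardizing with the true parameters yields z-scores, while substituting sample estimates yields t-like standardized residuals:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 1.75, 0.07, 12
x = rng.normal(mu, sigma, n)

# Standardized errors (z-scores): require the true mu and sigma.
z = (x - mu) / sigma

# Standardized residuals: replace mu and sigma with sample estimates;
# the estimated scale makes the result t-like rather than exactly normal.
t_like = (x - x.mean()) / x.std(ddof=1)

print(z.round(2))
print(t_like.round(2))
</syntaxhighlight>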
In univariate distributions
If we assume a normally distributed population with mean ''μ'' and standard deviation ''σ'', and choose individuals independently, then we have
:<math>X_1, \dots, X_n \sim N(\mu, \sigma^2)</math>
and the sample mean
:<math>\overline{X} = \frac{X_1 + \cdots + X_n}{n}</math>
is a random variable distributed such that:
:<math>\overline{X} \sim N\left(\mu, \frac{\sigma^2}{n}\right).</math>
The ''statistical errors'' are then
:<math>e_i = X_i - \mu,</math>
with expected values of zero, whereas the ''residuals'' are
:<math>r_i = X_i - \overline{X}.</math>
The sum of squares of the statistical errors, divided by ''σ''<sup>2</sup>, has a chi-squared distribution with ''n'' degrees of freedom:
:<math>\frac{1}{\sigma^2}\sum_{i=1}^n e_i^2 \sim \chi^2_n.</math>
However, this quantity is not observable, as the population mean is unknown. The sum of squares of the residuals, on the other hand, is observable. The quotient of that sum by ''σ''<sup>2</sup> has a chi-squared distribution with only ''n'' − 1 degrees of freedom:
:<math>\frac{1}{\sigma^2}\sum_{i=1}^n r_i^2 \sim \chi^2_{n-1}.</math>
This difference between ''n'' and ''n'' − 1 degrees of freedom results in Bessel's correction for the estimation of sample variance of a population with unknown mean and unknown variance. No correction is necessary if the population mean is known.
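A quick Monte Carlo check of these two distributional claims can be sketched as follows (the sample size and number of replications are arbitrary choices):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 0.0, 1.0, 5, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))

# Scaled sum of squared errors: chi-squared with n degrees of freedom (mean n).
sse_errors = ((samples - mu) ** 2).sum(axis=1) / sigma**2

# Scaled sum of squared residuals: chi-squared with n - 1 degrees of freedom (mean n - 1).
sse_resid = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / sigma**2

print(sse_errors.mean())  # ~ 5.0
print(sse_resid.mean())   # ~ 4.0, hence dividing by n - 1 (Bessel's correction)
</syntaxhighlight>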
Remark
It is remarkable that the sum of squares of the residuals and the sample mean can be shown to be independent of each other, using, e.g., Basu's theorem. That fact, and the normal and chi-squared distributions given above, form the basis of calculations involving the ''t''-statistic:
:<math>T = \frac{\overline{X}_n - \mu_0}{S_n / \sqrt{n}},</math>
where <math>\overline{X}_n - \mu_0</math> represents the errors, <math>S_n</math> represents the sample standard deviation for a sample of size ''n'' and unknown ''σ'', and the denominator term <math>S_n/\sqrt{n}</math> accounts for the standard deviation of the errors according to
:<math>\operatorname{Var}\left(\overline{X}_n\right) = \frac{\sigma^2}{n}.</math>
The probability distributions of the numerator and the denominator separately depend on the value of the unobservable population standard deviation ''σ'', but ''σ'' appears in both the numerator and the denominator and cancels. That is fortunate because it means that even though we do not know ''σ'', we know the probability distribution of this quotient: it has a Student's ''t''-distribution with ''n'' − 1 degrees of freedom. We can therefore use this quotient to find a confidence interval for ''μ''. This ''t''-statistic can be interpreted as "the number of standard errors away from the regression line."
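A minimal sketch of this calculation, assuming SciPy is available and using invented sample values:
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(1.75, 0.07, size=20)   # illustrative sample; the true sigma is not used below

n = x.size
t = (x.mean() - 1.75) / (x.std(ddof=1) / np.sqrt(n))  # t-statistic for mu0 = 1.75

# 95% confidence interval for mu, using the t-distribution with n - 1 df.
half_width = stats.t.ppf(0.975, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
print(t)
print(x.mean() - half_width, x.mean() + half_width)
</syntaxhighlight>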
Regressions
In regression analysis, the distinction between ''errors'' and ''residuals'' is subtle and important, and leads to the concept of studentized residuals. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from the ''fitted'' function are the residuals. If the linear model is applicable, a scatterplot of residuals plotted against the independent variable should be random about zero with no trend to the residuals.
If the data exhibit a trend, the regression model is likely incorrect; for example, the true function may be a quadratic or higher-order polynomial. If the residuals are random, or have no trend, but "fan out", they exhibit a phenomenon called heteroscedasticity. If all of the residuals are equal, or do not fan out, they exhibit homoscedasticity.
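These residual diagnostics can be sketched numerically; the following Python example (with synthetic data drawn from a true line) fits a line and crudely checks the residuals for trend and fan-out by comparing the two halves of the domain:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 1, x.size)   # data from a true line plus noise

slope, intercept = np.polyfit(x, y, deg=1)      # fitted line
residuals = y - (intercept + slope * x)

# A crude trend / fan-out check: compare residual means and spreads across halves.
lo, hi = residuals[: x.size // 2], residuals[x.size // 2 :]
print("mean, low/high x:", lo.mean().round(3), hi.mean().round(3))  # both near 0 if the model is right
print("sd,   low/high x:", lo.std().round(3), hi.std().round(3))    # similar if homoscedastic
</syntaxhighlight>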
However, a terminological difference arises in the expression mean squared error (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed ''residuals'', and not of the unobservable ''errors''. If that sum of squares is divided by ''n'', the number of observations, the result is the mean of the squared residuals. Since this is a biased estimate of the variance of the unobserved errors, the bias is removed by dividing the sum of the squared residuals by ''df'' = ''n'' − ''p'' − 1, instead of ''n'', where ''df'' is the number of degrees of freedom (''n'' minus ''p'', the number of parameters estimated excluding the intercept, minus 1 for the intercept). This forms an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error.
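As an illustrative sketch (synthetic data, simple linear regression with ''p'' = 1 slope parameter), the biased and bias-corrected estimates can be compared directly:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
n, p = 50, 1                                   # n observations, p slope parameters
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 3.0, n)      # true error variance is 9

beta = np.polyfit(x, y, 1)
residuals = y - np.polyval(beta, x)
ssr = (residuals ** 2).sum()

print(ssr / n)            # biased: the plain mean of squared residuals
print(ssr / (n - p - 1))  # unbiased estimate of the error variance (the MSE)
</syntaxhighlight>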
Another method to calculate the mean square of error applies when analyzing the variance of linear regression using a technique like that used in ANOVA (they are the same because ANOVA is a type of regression): the sum of squares of the residuals (also known as the sum of squares of the error) is divided by the degrees of freedom, ''n'' − ''p'' − 1, where ''p'' is the number of parameters estimated in the model (one for each variable in the regression equation, not including the intercept). One can then also calculate the mean square of the model by dividing the sum of squares of the model by its degrees of freedom, which is just the number of parameters. The F value can then be calculated by dividing the mean square of the model by the mean square of the error, and used to determine significance (which is why the mean squares are wanted in the first place).
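A sketch of that F computation on synthetic data, using only the sums of squares described above:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
n, p = 40, 1
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 2.0, n)

beta = np.polyfit(x, y, 1)
fitted = np.polyval(beta, x)

ss_model = ((fitted - y.mean()) ** 2).sum()    # sum of squares explained by the model
ss_error = ((y - fitted) ** 2).sum()           # sum of squares of the residuals

ms_model = ss_model / p                        # mean square of the model
ms_error = ss_error / (n - p - 1)              # mean square of the error

print("F =", ms_model / ms_error)              # large F suggests the slope is significant
</syntaxhighlight>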
However, because of the behavior of the process of regression, the ''distributions'' of residuals at different data points (of the input variable) may vary ''even if'' the errors themselves are identically distributed. Concretely, in a linear regression where the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will be ''higher'' than the variability of residuals at the ends of the domain: linear regressions fit endpoints better than the middle. This is also reflected in the influence functions of various data points on the regression coefficients: endpoints have more influence.
Thus to compare residuals at different inputs, one needs to adjust the residuals by the expected variability of ''residuals'', which is called studentizing. This is particularly important in the case of detecting outliers, where the case in question is somehow different from the others in a dataset. For example, a large residual may be expected in the middle of the domain, but considered an outlier at the end of the domain.
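For simple linear regression the leverage values have a closed form, so the studentizing adjustment can be sketched directly (synthetic data; the leverage formula below is the standard one for a single predictor):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(7)
n = 30
x = np.linspace(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, n)

beta = np.polyfit(x, y, 1)
residuals = y - np.polyval(beta, x)

# Leverage h_i for simple linear regression: highest at the ends of the domain.
h = 1.0 / n + (x - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum()

s = np.sqrt((residuals ** 2).sum() / (n - 2))      # residual standard error
studentized = residuals / (s * np.sqrt(1.0 - h))   # internally studentized residuals

print(h[[0, n // 2, -1]].round(3))                 # endpoint leverage exceeds the middle
print(studentized.round(2))
</syntaxhighlight>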
Other uses of the word "error" in statistics
The use of the term "error" as discussed in the sections above is in the sense of a deviation of a value from a hypothetical unobserved value. At least two other uses also occur in statistics, both referring to observable prediction errors:
The ''mean squared error'' (MSE) refers to the amount by which the values predicted by an estimator differ from the quantities being estimated (typically outside the sample from which the model was estimated).
The ''root mean square error'' (RMSE) is the square root of the MSE.
The ''sum of squares of errors'' (SSE) is the MSE multiplied by the sample size.
The ''sum of squares of residuals'' (SSR) is the sum of the squares of the deviations of the actual values from the predicted values, within the sample used for estimation. This is the basis for the least squares estimate, where the regression coefficients are chosen such that the SSR is minimal (i.e. its derivative is zero).
Likewise, the ''sum of absolute errors'' (SAE) is the sum of the absolute values of the residuals, which is minimized in the least absolute deviations approach to regression.
The ''mean error'' (ME) is the bias.
The ''mean residual'' (MR) is always zero for least-squares estimators.
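For reference, all of these summary quantities can be computed in a few lines (the observed and predicted values below are invented):
<syntaxhighlight lang="python">
import numpy as np

# Illustrative observed and predicted values.
observed = np.array([3.1, 4.0, 5.2, 6.1, 6.8])
predicted = np.array([3.0, 4.2, 5.0, 6.0, 7.0])

deviations = observed - predicted

sse = (deviations ** 2).sum()     # sum of squares of errors/residuals
mse = sse / observed.size         # mean squared error
rmse = np.sqrt(mse)               # root mean square error
sae = np.abs(deviations).sum()    # sum of absolute errors
me = deviations.mean()            # mean error (the bias)

print(sse, mse, rmse, sae, me)
</syntaxhighlight>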
See also
* Absolute deviation
* Consensus forecasts
* Error detection and correction
* Explained sum of squares
* Innovation (signal processing)
* Lack-of-fit sum of squares
* Margin of error
* Mean absolute error
* Observational error
* Propagation of error
* Probable error
* Random and systematic errors
* Reduced chi-squared statistic
* Regression dilution
* Root mean square deviation
* Sampling error
* Standard error
* Studentized residual
* Type I and type II errors
{{DEFAULTSORT:Errors And Residuals In Statistics}}
Statistical deviation and dispersion
Regression analysis