In statistics, particularly regression analysis, the Working–Hotelling procedure, named after Holbrook Working and Harold Hotelling, is a method of simultaneous estimation in linear regression models. One of the first developments in simultaneous inference, it was devised by Working and Hotelling for the simple linear regression model in 1929.[Miller (1966), p. 1] It provides a confidence region for multiple mean responses, that is, it gives the upper and lower bounds of more than one value of a dependent variable at several levels of the independent variables at a certain confidence level. The resulting confidence bands are known as the Working–Hotelling–Scheffé confidence bands.
Like the closely related Scheffé's method in the analysis of variance, which considers all possible contrasts, the Working–Hotelling procedure considers all possible values of the independent variables; that is, in a particular regression model, the probability that all of the Working–Hotelling confidence intervals cover the true value of the mean response is the confidence coefficient. As such, when only a small subset of the possible values of the independent variable is considered, it is more conservative and yields wider intervals than competitors such as the Bonferroni correction at the same level of confidence. It outperforms the Bonferroni correction as more values are considered.
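For intuition, the two critical multipliers can be compared directly: the Working–Hotelling band uses W with W^2 = 2 F_{\alpha;\, 2,\, n-2} no matter how many levels of the predictor are examined, whereas a Bonferroni adjustment for g mean responses uses the t percentile t_{\alpha/(2g);\, n-2}. A minimal sketch, assuming Python with SciPy is available; the sample size n = 25 and the values of g are arbitrary illustrations:

```python
from scipy import stats

alpha, n = 0.05, 25  # illustrative confidence level and sample size
# Working–Hotelling multiplier: W^2 = 2 * F(alpha; 2, n - 2)
W = (2 * stats.f.ppf(1 - alpha, 2, n - 2)) ** 0.5

for g in (1, 2, 5, 10, 50):  # number of mean responses being estimated
    bonf = stats.t.ppf(1 - alpha / (2 * g), n - 2)  # Bonferroni multiplier
    print(f"g = {g:3d}: Bonferroni = {bonf:.3f}, Working–Hotelling = {W:.3f}")
```

For small g the Bonferroni multiplier is smaller and gives narrower intervals; as g grows it eventually exceeds W, while the Working–Hotelling multiplier stays fixed because it already covers every level of the predictor.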
Statement
Simple linear regression
Consider a simple linear regression model Y = \beta_0 + \beta_1 X + \varepsilon, where Y is the response variable and X the explanatory variable, and let b_0 and b_1 be the least-squares estimates of \beta_0 and \beta_1 respectively. Then the least-squares estimate of the mean response E(Y_i) at the level X = x_i is \hat{y}_i = b_0 + b_1 x_i. It can then be shown, assuming that the errors independently and identically follow the normal distribution, that a 1 - \alpha confidence interval of the mean response at a certain level of X is as follows:

: \hat{y}_i \in \left[ b_0 + b_1 x_i \pm t_{\alpha/2;\, n-2} \sqrt{\mathrm{MSE} \left( \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_j (x_j - \bar{x})^2} \right)} \right],

where \mathrm{MSE} is the mean squared error and t_{\alpha/2;\, n-2} denotes the upper \alpha/2 percentile of Student's t-distribution with n - 2 degrees of freedom.
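This pointwise interval can be computed directly from the least-squares fit. A minimal sketch, assuming Python with NumPy and SciPy; the arrays x and y and the level x0 are placeholders supplied by the user:

```python
import numpy as np
from scipy import stats

def pointwise_mean_ci(x, y, x0, alpha=0.05):
    """Pointwise (1 - alpha) CI for the mean response at x0 in simple linear regression."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx   # least-squares slope
    b0 = y.mean() - b1 * x.mean()                        # least-squares intercept
    mse = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)     # mean squared error
    se = np.sqrt(mse * (1 / n + (x0 - x.mean()) ** 2 / sxx))
    t = stats.t.ppf(1 - alpha / 2, n - 2)                # upper alpha/2 t percentile
    yhat = b0 + b1 * x0
    return yhat - t * se, yhat + t * se
```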
However, as multiple mean responses are estimated, the confidence level declines rapidly. To fix the confidence coefficient at 1 - \alpha, the Working–Hotelling approach employs an F-statistic:[Miller (2014)][Neter, Wasserman and Kutner, pp. 163–165]

: \hat{y}_i \in \left[ b_0 + b_1 x_i \pm W \sqrt{\mathrm{MSE} \left( \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_j (x_j - \bar{x})^2} \right)} \right],

where W^2 = 2 F_{\alpha;\, 2,\, n-2} and F_{\alpha;\, 2,\, n-2} denotes the upper \alpha percentile of the F-distribution with (2, n - 2) degrees of freedom. The confidence level of the resulting band is 1 - \alpha over ''all'' values of X, i.e.

: P\left( \beta_0 + \beta_1 x \in \left[ b_0 + b_1 x \pm W \sqrt{\mathrm{MSE} \left( \frac{1}{n} + \frac{(x - \bar{x})^2}{\sum_j (x_j - \bar{x})^2} \right)} \right] \text{ for all } x \right) = 1 - \alpha.
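Only the multiplier changes relative to the pointwise interval. A sketch of the simultaneous band under the same assumptions as the previous example (illustrative names, not a fixed API):

```python
import numpy as np
from scipy import stats

def working_hotelling_band(x, y, x_grid, alpha=0.05):
    """Simultaneous (1 - alpha) Working–Hotelling band for the mean response."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()
    mse = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)
    W = np.sqrt(2 * stats.f.ppf(1 - alpha, 2, n - 2))    # W^2 = 2 F(alpha; 2, n - 2)
    x_grid = np.asarray(x_grid, float)
    yhat = b0 + b1 * x_grid
    se = np.sqrt(mse * (1 / n + (x_grid - x.mean()) ** 2 / sxx))
    return yhat - W * se, yhat + W * se
```

Evaluating the band on a grid of x values and connecting the limits traces the simultaneous confidence band.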
Multiple linear regression
The Working–Hotelling confidence bands can be easily generalised to multiple linear regression. Consider a general linear model as defined in the linear regressions article, that is,

: \mathbf{Y} = \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon},

where

: \mathbf{Y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \quad \mathbf{X} = \begin{bmatrix} 1 & x_{11} & \cdots & x_{1,p-1} \\ 1 & x_{21} & \cdots & x_{2,p-1} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & \cdots & x_{n,p-1} \end{bmatrix}, \quad \boldsymbol{\beta} = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_{p-1} \end{bmatrix}, \quad \boldsymbol{\varepsilon} = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}.

Again, it can be shown that the least-squares estimate of the mean response E(y_i) = \mathbf{x}_i^{\mathrm{T}} \boldsymbol{\beta} is \hat{y}_i = \mathbf{x}_i^{\mathrm{T}} \mathbf{b}, where \mathbf{b} consists of the least-squares estimates of the entries in \boldsymbol{\beta}, i.e. \mathbf{b} = (\mathbf{X}^{\mathrm{T}} \mathbf{X})^{-1} \mathbf{X}^{\mathrm{T}} \mathbf{Y}. Likewise, it can be shown that a 1 - \alpha confidence interval for a single mean response estimate is as follows:

: \hat{y}_i \in \left[ \mathbf{x}_i^{\mathrm{T}} \mathbf{b} \pm t_{\alpha/2;\, n-p} \sqrt{\mathrm{MSE} \cdot \mathbf{x}_i^{\mathrm{T}} (\mathbf{X}^{\mathrm{T}} \mathbf{X})^{-1} \mathbf{x}_i} \right],

where \mathrm{MSE} is the observed value of the mean squared error \hat{\boldsymbol{\varepsilon}}^{\mathrm{T}} \hat{\boldsymbol{\varepsilon}} / (n - p), with \hat{\boldsymbol{\varepsilon}} = \mathbf{Y} - \mathbf{X}\mathbf{b} the vector of residuals.
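In matrix form the computation is a few lines of linear algebra. A minimal sketch, assuming Python with NumPy and SciPy; here X is the n × p design matrix with a leading column of ones, y the response vector, and x0 a single predictor row at which the mean response is estimated (illustrative names, not a standard API):

```python
import numpy as np
from scipy import stats

def mean_response_ci(X, y, x0, alpha=0.05):
    """Pointwise (1 - alpha) CI for the mean response x0' beta in a linear model."""
    X, y, x0 = np.asarray(X, float), np.asarray(y, float), np.asarray(x0, float)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                       # least-squares estimate of beta
    mse = np.sum((y - X @ b) ** 2) / (n - p)    # observed mean squared error
    se = np.sqrt(mse * (x0 @ XtX_inv @ x0))
    t = stats.t.ppf(1 - alpha / 2, n - p)       # upper alpha/2 t percentile
    yhat = x0 @ b
    return yhat - t * se, yhat + t * se
```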
The Working–Hotelling approach to multiple estimations is similar to that of simple linear regression, with only a change in the degrees of freedom:

: \hat{y}_i \in \left[ \mathbf{x}_i^{\mathrm{T}} \mathbf{b} \pm W \sqrt{\mathrm{MSE} \cdot \mathbf{x}_i^{\mathrm{T}} (\mathbf{X}^{\mathrm{T}} \mathbf{X})^{-1} \mathbf{x}_i} \right],

where W^2 = p F_{\alpha;\, p,\, n-p} and p is the number of regression parameters, including the intercept.
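The simultaneous version only swaps the t percentile for W; a sketch under the same assumptions as the previous example:

```python
import numpy as np
from scipy import stats

def wh_multiplier(n, p, alpha=0.05):
    """Working–Hotelling multiplier W, with W^2 = p * F(alpha; p, n - p)."""
    return np.sqrt(p * stats.f.ppf(1 - alpha, p, n - p))
```

Replacing the t percentile in the previous sketch with wh_multiplier(n, p, alpha) yields intervals that hold simultaneously over every choice of the predictor row x0.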
Graphical representation
In the simple linear regression case, Working–Hotelling–Scheffé confidence bands, drawn by connecting the upper and lower limits of the mean response at every level, take the shape of hyperbolas. In drawing, they are sometimes approximated by the Graybill–Bowden confidence bands, which are linear and hence easier to graph:

: \beta_0 + \beta_1 x \in \left[ b_0 + b_1 x \pm m_{\alpha;\, 2,\, n-2} \sqrt{\mathrm{MSE}} \left( \frac{1}{\sqrt{n}} + \frac{|x - \bar{x}|}{\sqrt{\sum_j (x_j - \bar{x})^2}} \right) \right],

where m_{\alpha;\, 2,\, n-2} denotes the upper \alpha percentile of the Studentized maximum modulus distribution with two means and n - 2 degrees of freedom.
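The hyperbolic shape of the Working–Hotelling–Scheffé band can be seen by evaluating it on a grid of predictor values and plotting the limits. A self-contained sketch, assuming Python with NumPy, SciPy and Matplotlib and using simulated data purely for illustration (the Graybill–Bowden approximation is omitted here because its critical value comes from the Studentized maximum modulus distribution, which SciPy does not provide):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 1.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)   # simulated illustrative data

n = x.size
sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b0 = y.mean() - b1 * x.mean()
mse = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)
W = np.sqrt(2 * stats.f.ppf(0.95, 2, n - 2))             # 95% Working–Hotelling multiplier

grid = np.linspace(x.min(), x.max(), 200)
yhat = b0 + b1 * grid
se = np.sqrt(mse * (1 / n + (grid - x.mean()) ** 2 / sxx))

plt.scatter(x, y, s=12)
plt.plot(grid, yhat, label="fitted line")
plt.plot(grid, yhat - W * se, "r--", label="Working–Hotelling band")
plt.plot(grid, yhat + W * se, "r--")
plt.legend()
plt.show()
```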