In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function, known as the ''score'', evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite-sample distributions of score tests are generally unknown, they have an asymptotic \chi^2 distribution under the null hypothesis, as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance.
Since function maximization subject to equality constraints is most conveniently done using a Lagrangian expression of the problem, the score test can be equivalently understood as a test of the magnitude of the Lagrange multipliers associated with the constraints, where, again, if the constraints are non-binding at the maximum likelihood, the vector of Lagrange multipliers should not differ from zero by more than sampling error. The equivalence of these two approaches was first shown by S. D. Silvey in 1959, which led to the name Lagrange multiplier test; that name has become more commonly used, particularly in econometrics, since Breusch and Pagan's much-cited 1980 paper.
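As a sketch of why the two views coincide in the simplest, single-constraint case (an illustrative derivation, under the usual regularity assumptions): maximizing \log L(\theta) subject to \theta = \theta_0 via the Lagrangian

: \log L(\theta) - \lambda(\theta - \theta_0)

gives the first-order condition \partial \log L/\partial \theta = \lambda, so the multiplier at the restricted optimum equals the score evaluated at \theta_0. Testing whether \lambda differs from zero is therefore the same as testing whether the score differs from zero.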
The main advantage of the score test over the Wald test and likelihood-ratio test is that the score test only requires the computation of the restricted estimator. This makes testing feasible when the unconstrained maximum likelihood estimate is a boundary point in the parameter space. Further, because the score test only requires the estimation of the likelihood function under the null hypothesis, it is less specific than the likelihood-ratio test about the alternative hypothesis.
Single-parameter test
The statistic
Let L(\theta \mid x) be the likelihood function, which depends on a univariate parameter \theta, and let x be the data. The score U(\theta) is defined as

: U(\theta) = \frac{\partial \log L(\theta \mid x)}{\partial \theta}.

The Fisher information is

: I(\theta) = -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\,\right|\,\theta\right],

where f is the probability density.
The statistic to test H_0 : \theta = \theta_0 is

: S(\theta_0) = \frac{U(\theta_0)^2}{I(\theta_0)},

which has an asymptotic distribution of \chi^2_1 when H_0 is true. While asymptotically identical, calculating the LM statistic using the outer-gradient-product estimator of the Fisher information matrix can lead to bias in small samples.
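As a concrete sketch of this statistic, the following example computes the score test for a binomial proportion; the counts (62 successes in 100 trials) and the null value p_0 = 0.5 are hypothetical values chosen for illustration.

```python
import math

def score_test_binomial(k, n, p0):
    """Score test of H0: p = p0 for k successes in n Bernoulli trials."""
    # Score: derivative of k*log(p) + (n - k)*log(1 - p) at p = p0
    U = k / p0 - (n - k) / (1 - p0)
    # Fisher information of the binomial likelihood: n / (p0 * (1 - p0))
    I = n / (p0 * (1 - p0))
    return U ** 2 / I  # asymptotically chi-squared with 1 df under H0

S = score_test_binomial(62, 100, 0.5)
print(S)             # 5.76, above the 5% chi-squared(1) critical value 3.84
print(math.sqrt(S))  # 2.4, the normal-form statistic S* noted below
```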
Note on notation
Note that some texts use an alternative notation, in which the statistic S^*(\theta) = \sqrt{S(\theta)} is tested against a normal distribution. This approach is equivalent and gives identical results.
As most powerful test for small deviations
: \left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta=\theta_0} \geq C,

where L is the likelihood function, \theta_0 is the value of the parameter of interest under the null hypothesis, and C is a constant set depending on the size of the test desired (i.e. the probability of rejecting H_0 if H_0 is true; see Type I error).
The score test is the most powerful test for small deviations from \theta_0. To see this, consider testing \theta = \theta_0 versus \theta = \theta_0 + h. By the Neyman–Pearson lemma, the most powerful test has the form

: \frac{L(\theta_0 + h \mid x)}{L(\theta_0 \mid x)} \geq K.

Taking the log of both sides yields

: \log L(\theta_0 + h \mid x) - \log L(\theta_0 \mid x) \geq \log K.

The score test follows upon making the substitution (by Taylor series expansion)

: \log L(\theta_0 + h \mid x) \approx \log L(\theta_0 \mid x) + h \left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta=\theta_0}

and identifying the C above with \log(K)/h.
Relationship with other hypothesis tests
If the null hypothesis is true, the likelihood-ratio test, the Wald test, and the score test are asymptotically equivalent tests of hypotheses. When testing nested models, the statistics for each test then converge to a chi-squared distribution with degrees of freedom equal to the difference in degrees of freedom in the two models. If the null hypothesis is not true, however, the statistics converge to a noncentral chi-squared distribution with possibly different noncentrality parameters.
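Continuing the hypothetical binomial example above, a short sketch comparing the three statistics shows how close they are in a moderate sample:

```python
import math

k, n, p0 = 62, 100, 0.5   # same hypothetical data as above
p_hat = k / n

# Likelihood-ratio statistic: 2 * [log L(p_hat) - log L(p0)]
LR = 2 * (k * math.log(p_hat / p0) + (n - k) * math.log((1 - p_hat) / (1 - p0)))

# Wald statistic: squared distance scaled by the information at p_hat
wald = (p_hat - p0) ** 2 * n / (p_hat * (1 - p_hat))

# Score statistic: squared distance scaled by the information at p0
score = (p_hat - p0) ** 2 * n / (p0 * (1 - p0))

print(LR, wald, score)  # about 5.82, 6.11 and 5.76: close, as expected
```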
Multiple parameters
A more general score test can be derived when there is more than one parameter. Suppose that \widehat{\theta}_0 is the maximum likelihood estimate of \theta under the null hypothesis H_0, while U and I are, respectively, the score vector and the Fisher information matrix under the alternative hypothesis. Then

: S(\widehat{\theta}_0) = U(\widehat{\theta}_0)^\mathsf{T} I(\widehat{\theta}_0)^{-1} U(\widehat{\theta}_0) \sim \chi^2_k

asymptotically under H_0, where k is the number of constraints imposed by the null hypothesis and

: U(\widehat{\theta}_0) = \frac{\partial \log L(\widehat{\theta}_0 \mid x)}{\partial \theta}

and

: I(\widehat{\theta}_0) = -\operatorname{E}\left(\frac{\partial^2 \log L(\widehat{\theta}_0 \mid x)}{\partial \theta \, \partial \theta'}\right).

This can be used to test H_0.
The actual formula for the test statistic depends on which estimator of the Fisher information matrix is being used.
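As a sketch of the multi-parameter statistic, the following hypothetical example tests equal cell probabilities in a three-category multinomial, so the null imposes k = 2 constraints; the counts are invented for illustration.

```python
import numpy as np

# Hypothetical counts for a three-category multinomial, testing
# H0: p1 = p2 = p3 = 1/3 (two constraints, so k = 2 degrees of freedom).
x = np.array([45.0, 30.0, 25.0])
n = x.sum()
p1 = p2 = 1.0 / 3.0
p3 = 1.0 - p1 - p2

# Score vector: partial derivatives of the multinomial log-likelihood
# with respect to the free parameters (p1, p2), evaluated under H0
U = np.array([x[0] / p1 - x[2] / p3,
              x[1] / p2 - x[2] / p3])

# Fisher information matrix of (p1, p2) under H0
I = n * np.array([[1.0 / p1 + 1.0 / p3, 1.0 / p3],
                  [1.0 / p3, 1.0 / p2 + 1.0 / p3]])

S = U @ np.linalg.solve(I, U)  # U^T I^{-1} U, asymptotically chi-squared(2)
print(S)  # 6.5

# Cross-check: equals Pearson's chi-squared statistic on the same table
expected = n / 3.0
print(((x - expected) ** 2 / expected).sum())  # also 6.5
```

The agreement of the two printed values anticipates the categorical special case noted in the next section.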
Special cases
In many situations, the score statistic reduces to another commonly used statistic.
In linear regression, the Lagrange multiplier test can be expressed as a function of the ''F''-test.

When the data follows a normal distribution, the score statistic is the same as the ''t'' statistic.

When the data consists of binary observations, the score statistic is the same as the chi-squared statistic in Pearson's chi-squared test.
See also
* Fisher information
* Uniformly most powerful test
* Score (statistics)
* Sup-LM test