In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function, known as the ''score'', evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite-sample distributions of score tests are generally unknown, they have an asymptotic $\chi^2$-distribution under the null hypothesis, as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance.
Since function maximization subject to equality constraints is most conveniently done using a Lagrangean expression of the problem, the score test can be equivalently understood as a test of the magnitude of the Lagrange multipliers associated with the constraints, where, again, if the constraints are non-binding at the maximum likelihood, the vector of Lagrange multipliers should not differ from zero by more than sampling error. The equivalence of these two approaches was first shown by S. D. Silvey in 1959, which led to the name Lagrange multiplier (LM) test, a name that has become more commonly used, particularly in econometrics, since Breusch and Pagan's much-cited 1980 paper.
The main advantage of the score test over the Wald test and likelihood-ratio test is that the score test requires only the computation of the restricted estimator. This makes testing feasible when the unconstrained maximum likelihood estimate is a boundary point in the parameter space. Further, because the score test requires only the estimation of the likelihood function under the null hypothesis, it is less specific than the likelihood-ratio test about the alternative hypothesis.
Single-parameter test
The statistic
Let $L(\theta \mid x)$ be the likelihood function, which depends on a univariate parameter $\theta$, and let $x$ be the data. The score $U(\theta)$ is defined as

:$U(\theta) = \frac{\partial \log L(\theta \mid x)}{\partial \theta}.$
The Fisher information is

:$I(\theta) = -\operatorname{E}\left[\frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\right],$

where $f$ is the probability density.
The statistic to test $H_0 : \theta = \theta_0$ is

:$S(\theta_0) = \frac{U(\theta_0)^2}{I(\theta_0)},$

which has an asymptotic distribution of $\chi^2_1$ when $H_0$ is true. While asymptotically identical, calculating the LM statistic using the outer-gradient-product estimator of the Fisher information matrix can lead to bias in small samples.
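As an illustration, the following is a minimal sketch of the single-parameter test for i.i.d. Poisson data, a model chosen here only because its score $U(\lambda) = \sum_i x_i/\lambda - n$ and Fisher information $I(\lambda) = n/\lambda$ have simple closed forms; the function name and simulated data are illustrative, not part of any standard library.

```python
import numpy as np
from scipy.stats import chi2

def poisson_score_test(x, lam0):
    """Score test of H0: lambda = lam0 for i.i.d. Poisson data.

    The log-likelihood (up to a constant) is sum(x)*log(lam) - n*lam,
    so the score is U(lam) = sum(x)/lam - n and the Fisher information
    for n observations is I(lam) = n/lam.
    """
    x = np.asarray(x)
    n = len(x)
    U = x.sum() / lam0 - n        # score evaluated at the null value
    I = n / lam0                  # Fisher information at the null value
    S = U ** 2 / I                # asymptotically chi-squared with 1 df under H0
    return S, chi2.sf(S, df=1)

# Illustrative data drawn from a rate that differs slightly from the null
rng = np.random.default_rng(0)
x = rng.poisson(2.4, size=50)
print(poisson_score_test(x, lam0=2.0))
```

Note that only the null value $\lambda_0$ enters the computation; no unrestricted estimate is needed, which is the advantage over the Wald and likelihood-ratio tests noted above.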
Note on notation
Note that some texts use an alternative notation, in which the statistic $S^*(\theta) = \sqrt{S(\theta)}$ is tested against a normal distribution. This approach is equivalent and gives identical results.
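This equivalence is easy to confirm numerically: the two-sided p-value of $S^*$ under a standard normal equals the upper-tail p-value of $S$ under $\chi^2_1$. A small sketch (the value of $S$ is arbitrary):

```python
from math import sqrt
from scipy.stats import chi2, norm

S = 3.17                    # an arbitrary score statistic on the chi-squared scale
S_star = sqrt(S)            # the signed-root form (sign of the score dropped here)

print(chi2.sf(S, df=1))     # p-value from the chi-squared formulation
print(2 * norm.sf(S_star))  # identical two-sided p-value from the normal form
```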
As most powerful test for small deviations
Consider the test that rejects $H_0$ whenever

:$\left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta=\theta_0} \geq C,$

where $L$ is the likelihood function, $\theta_0$ is the value of the parameter of interest under the null hypothesis, and $C$ is a constant set depending on the size of the test desired (i.e. the probability of rejecting $H_0$ if $H_0$ is true; see Type I error).
The score test is the most powerful test for small deviations from $H_0$. To see this, consider testing $\theta = \theta_0$ versus $\theta = \theta_0 + h$. By the Neyman–Pearson lemma, the most powerful test has the form

:$\frac{L(\theta_0 + h \mid x)}{L(\theta_0 \mid x)} \geq K.$

Taking the log of both sides yields

:$\log L(\theta_0 + h \mid x) - \log L(\theta_0 \mid x) \geq \log K.$

The score test follows on making the substitution (by Taylor series expansion)

:$\log L(\theta_0 + h \mid x) \approx \log L(\theta_0 \mid x) + h \left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta=\theta_0}$

and identifying the $C$ above with $\log(K)/h$ (for $h > 0$).
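The Taylor substitution above can be checked numerically. A sketch using Poisson data (the model and numbers are illustrative choices, not part of the derivation): for small $h$, the log likelihood ratio is nearly $h$ times the score at $\theta_0$, so thresholding one is in effect thresholding the other.

```python
import numpy as np

def poisson_loglik(x, lam):
    # Log-likelihood up to an additive constant that cancels in differences.
    return x.sum() * np.log(lam) - len(x) * lam

rng = np.random.default_rng(1)
x = rng.poisson(2.0, size=50)
lam0, h = 2.0, 0.05

lhs = poisson_loglik(x, lam0 + h) - poisson_loglik(x, lam0)  # log likelihood ratio
rhs = h * (x.sum() / lam0 - len(x))                          # h * score at lam0
print(lhs, rhs)  # nearly equal for small h
```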
Relationship with other hypothesis tests
If the null hypothesis is true, the likelihood-ratio test, the Wald test, and the score test are asymptotically equivalent tests of hypotheses. When testing nested models, the statistics for each test then converge to a chi-squared distribution with degrees of freedom equal to the difference in degrees of freedom in the two models. If the null hypothesis is not true, however, the statistics converge to a noncentral chi-squared distribution with possibly different noncentrality parameters.
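This asymptotic equivalence can be observed in simulation. A sketch for a Poisson mean (an illustrative model choice; seed and sample size are arbitrary) computes all three statistics on the same data:

```python
import numpy as np

def loglik(x, lam):
    # Poisson log-likelihood up to an additive constant.
    return x.sum() * np.log(lam) - len(x) * lam

rng = np.random.default_rng(2)
x = rng.poisson(2.0, size=200)
lam0, n = 2.0, len(x)
lam_hat = x.mean()                              # unrestricted MLE

LR    = 2 * (loglik(x, lam_hat) - loglik(x, lam0))
wald  = (lam_hat - lam0) ** 2 * (n / lam_hat)   # (MLE - null)^2 * I(MLE)
score = (x.sum() / lam0 - n) ** 2 / (n / lam0)  # U(lam0)^2 / I(lam0)
print(LR, wald, score)  # close to one another; each is chi2(1) under H0
```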
Multiple parameters
A more general score test can be derived when there is more than one parameter. Suppose that $\widehat{\theta}_0$ is the maximum likelihood estimate of $\theta$ under the null hypothesis $H_0$, while $U$ and $I$ are, respectively, the score vector and the Fisher information matrix. Then

:$U(\widehat{\theta}_0)^\mathsf{T} I^{-1}(\widehat{\theta}_0)\, U(\widehat{\theta}_0) \sim \chi^2_k$

asymptotically under $H_0$, where $k$ is the number of constraints imposed by the null hypothesis and

:$U(\widehat{\theta}_0) = \frac{\partial \log L(\widehat{\theta}_0 \mid x)}{\partial \theta}$

and

:$I(\widehat{\theta}_0) = -\operatorname{E}\left(\frac{\partial^2 \log L(\widehat{\theta}_0 \mid x)}{\partial \theta \, \partial \theta'}\right).$

This can be used to test $H_0$.
The actual formula for the test statistic depends on which estimator of the Fisher information matrix is being used.
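As a concrete sketch of the multiparameter form, consider testing $H_0 : \mu = \mu_0$ for i.i.d. normal data with $\sigma^2$ a free nuisance parameter (an illustrative case; the function name is ours). At the restricted MLE the score component for $\sigma^2$ vanishes and the Fisher information matrix is diagonal, so the quadratic form collapses to a single-degree-of-freedom statistic:

```python
import numpy as np
from scipy.stats import chi2

def normal_mean_score_test(x, mu0):
    """Score test of H0: mu = mu0, treating sigma^2 as a nuisance parameter.

    Restricted MLE: mu = mu0, sigma0^2 = mean((x - mu0)^2). There the score
    for sigma^2 is zero and the Fisher information matrix is diagonal, so
    U' I^{-1} U reduces to n * (xbar - mu0)^2 / sigma0^2, with k = 1.
    """
    x = np.asarray(x)
    n = len(x)
    sigma0_sq = np.mean((x - mu0) ** 2)   # restricted MLE of sigma^2
    U_mu = (x - mu0).sum() / sigma0_sq    # score for mu at the restricted MLE
    I_mu = n / sigma0_sq                  # (mu, mu) entry of the information matrix
    S = U_mu ** 2 / I_mu                  # one constraint => chi2 with 1 df
    return S, chi2.sf(S, df=1)

rng = np.random.default_rng(4)
x = rng.normal(0.3, 1.0, size=80)
print(normal_mean_score_test(x, mu0=0.0))
```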
Special cases
In many situations, the score statistic reduces to another commonly used statistic.
In linear regression, the Lagrange multiplier test can be expressed as a function of the ''F''-test.
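One common version of this relationship, assuming the LM statistic is computed with the restricted variance estimate $\mathrm{RSS}_R/n$, is $LM = nqF/(n - k + qF)$, where $q$ is the number of restrictions and $k$ the number of unrestricted parameters. A sketch verifying this on simulated data (the data-generating process is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + rng.normal(size=n)       # H0: the coefficient on x2 is zero

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X_u = np.column_stack([np.ones(n), x1, x2])   # unrestricted design
X_r = np.column_stack([np.ones(n), x1])       # restricted design (H0 imposed)
rss_u, rss_r = rss(X_u, y), rss(X_r, y)

q, k = 1, X_u.shape[1]                        # restrictions; unrestricted params
F = ((rss_r - rss_u) / q) / (rss_u / (n - k))
LM = n * (rss_r - rss_u) / rss_r              # score/LM statistic
print(LM, n * q * F / (n - k + q * F))        # identical values
```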
When the data follow a normal distribution, the score statistic is the same as the ''t'' statistic.
When the data consist of binary observations, the score statistic is the same as the chi-squared statistic in Pearson's chi-squared test.
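For a single proportion this identity can be verified directly: testing $H_0 : p = p_0$ from $x$ successes in $n$ Bernoulli trials, both statistics equal $(x - np_0)^2 / \big(np_0(1 - p_0)\big)$. A sketch with illustrative counts:

```python
import numpy as np

n, x, p0 = 100, 62, 0.5

# Score test: U(p0) = (x - n*p0)/(p0*(1 - p0)), I(p0) = n/(p0*(1 - p0))
U = (x - n * p0) / (p0 * (1 - p0))
I = n / (p0 * (1 - p0))
score_stat = U ** 2 / I

# Pearson chi-squared on the 2-cell table of successes and failures
observed = np.array([x, n - x])
expected = np.array([n * p0, n * (1 - p0)])
pearson = ((observed - expected) ** 2 / expected).sum()

print(score_stat, pearson)  # identical: (x - n*p0)^2 / (n*p0*(1 - p0))
```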
See also
* Fisher information
* Uniformly most powerful test
* Score (statistics)
* Sup-LM test