Score test

In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function—known as the ''score''—evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite sample distributions of score tests are generally unknown, they have an asymptotic χ²-distribution under the null hypothesis, as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance.

Since function maximization subject to equality constraints is most conveniently done using a Lagrangean expression of the problem, the score test can be equivalently understood as a test of the magnitude of the Lagrange multipliers associated with the constraints where, again, if the constraints are non-binding at the maximum likelihood, the vector of Lagrange multipliers should not differ from zero by more than sampling error. The equivalence of these two approaches was first shown by S. D. Silvey in 1959, which led to the name Lagrange multiplier (LM) test; the latter name has become more commonly used, particularly in econometrics, since Breusch and Pagan's much-cited 1980 paper.

The main advantage of the score test over the Wald test and likelihood-ratio test is that the score test only requires the computation of the restricted estimator. This makes testing feasible when the unconstrained maximum likelihood estimate is a boundary point in the parameter space. Further, because the score test only requires the estimation of the likelihood function under the null hypothesis, it is less specific than the likelihood-ratio test about the alternative hypothesis.


Single-parameter test


The statistic

Let L be the likelihood function which depends on a univariate parameter \theta and let x be the data. The score U(\theta) is defined as

: U(\theta)=\frac{\partial \log L(\theta \mid x)}{\partial \theta}.

The Fisher information is

: I(\theta) = -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\,\right|\,\theta \right],

where f is the probability density. The statistic to test \mathcal{H}_0:\theta=\theta_0 is

: S(\theta_0) = \frac{U(\theta_0)^2}{I(\theta_0)},

which has an asymptotic distribution of \chi^2_1 when \mathcal{H}_0 is true. While asymptotically identical, calculating the LM statistic using the outer-gradient-product estimator of the Fisher information matrix can lead to bias in small samples.
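As a concrete illustration, the sketch below computes the single-parameter score statistic for a binomial sample under the null H_0: p = 0.5; the counts, variable names, and the use of SciPy are illustrative assumptions rather than part of the original presentation.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical binomial data: x successes in n Bernoulli trials, testing H0: p = p0.
x, n, p0 = 37, 100, 0.5

# Score U(p0): derivative of the binomial log-likelihood evaluated at p0.
U = x / p0 - (n - x) / (1 - p0)

# Fisher information I(p0) for n Bernoulli trials.
I = n / (p0 * (1 - p0))

# Score statistic S(p0) = U(p0)^2 / I(p0), asymptotically chi-squared with 1 df under H0.
S = U ** 2 / I
p_value = chi2.sf(S, df=1)
print(S, p_value)  # here S = (x - n*p0)^2 / (n*p0*(1 - p0)) = 6.76
```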


Note on notation

Note that some texts use an alternative notation, in which the statistic S^*(\theta)=\sqrt{S(\theta)} is tested against a normal distribution. This approach is equivalent and gives identical results.
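Continuing the hypothetical binomial sketch above, the snippet below checks numerically that the square-root form referred to a normal distribution gives the same p-value as the chi-squared form (assuming a two-sided comparison).

```python
import numpy as np
from scipy.stats import norm, chi2

x, n, p0 = 37, 100, 0.5                  # same made-up binomial data as above
U = x / p0 - (n - x) / (1 - p0)          # score
I = n / (p0 * (1 - p0))                  # Fisher information
S = U ** 2 / I                           # chi-squared form of the statistic

s_star = np.sqrt(S)                      # S*(theta_0) = sqrt(S(theta_0)), referred to N(0, 1)
# A two-sided normal comparison of s_star gives the same p-value as the chi-squared form.
assert np.isclose(2 * norm.sf(s_star), chi2.sf(S, df=1))
```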


As most powerful test for small deviations

The score test rejects H_0 when

: \left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta=\theta_0} \geq C,

where L is the likelihood function, \theta_0 is the value of the parameter of interest under the null hypothesis, and C is a constant set depending on the size of the test desired (i.e. the probability of rejecting H_0 if H_0 is true; see Type I error). The score test is the most powerful test for small deviations from H_0. To see this, consider testing \theta=\theta_0 versus \theta=\theta_0+h. By the Neyman–Pearson lemma, the most powerful test has the form

: \frac{L(\theta_0+h\mid x)}{L(\theta_0\mid x)} \geq K.

Taking the log of both sides yields

: \log L(\theta_0 + h \mid x ) - \log L(\theta_0\mid x) \geq \log K.

The score test follows by making the substitution (by Taylor series expansion)

: \log L(\theta_0+h\mid x) \approx \log L(\theta_0\mid x) + h\times \left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta=\theta_0}

and identifying the C above with \log(K).
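For example, if X_1,\ldots,X_n are independent draws from a normal distribution with mean \theta and known variance 1, then \partial \log L(\theta\mid x)/\partial\theta = n(\bar{x}-\theta), so the score test rejects H_0 when n(\bar{x}-\theta_0) \geq C, i.e. when \bar{x} is sufficiently large. The Neyman–Pearson test of \theta_0 against \theta_0+h has exactly the same form for every h>0, which illustrates the local optimality described above.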


Relationship with other hypothesis tests

If the null hypothesis is true, the likelihood-ratio test, the Wald test, and the score test are asymptotically equivalent tests of hypotheses. When testing nested models, the statistics for each test then converge to a chi-squared distribution with degrees of freedom equal to the difference in degrees of freedom in the two models. If the null hypothesis is not true, however, the statistics converge to a noncentral chi-squared distribution with possibly different noncentrality parameters.
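To make the asymptotic equivalence concrete, the sketch below compares the three statistics on a hypothetical binomial sample (the counts and variable names are made up for illustration); under H_0 all three are referred to a \chi^2_1 distribution and give similar p-values.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical binomial sample: x successes in n trials, testing H0: p = p0.
x, n, p0 = 37, 100, 0.5
p_hat = x / n  # unrestricted maximum likelihood estimate

def loglik(p):
    """Binomial log-likelihood (up to an additive constant)."""
    return x * np.log(p) + (n - x) * np.log(1 - p)

lr    = 2 * (loglik(p_hat) - loglik(p0))                # likelihood-ratio statistic
wald  = n * (p_hat - p0) ** 2 / (p_hat * (1 - p_hat))   # Wald: information evaluated at the MLE
score = n * (p_hat - p0) ** 2 / (p0 * (1 - p0))         # score: information evaluated at p0

# Under H0 all three are asymptotically chi-squared with 1 degree of freedom.
for name, stat in [("LR", lr), ("Wald", wald), ("Score", score)]:
    print(name, round(float(stat), 3), round(float(chi2.sf(stat, df=1)), 4))
```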


Multiple parameters

A more general score test can be derived when there is more than one parameter. Suppose that \widehat{\theta}_0 is the maximum likelihood estimate of \theta under the null hypothesis H_0, while U and I are, respectively, the score and the Fisher information matrices under the alternative hypothesis. Then

: U^{T}(\widehat{\theta}_0) I^{-1}(\widehat{\theta}_0) U(\widehat{\theta}_0) \sim \chi^2_k

asymptotically under H_0, where k is the number of constraints imposed by the null hypothesis and

: U(\widehat{\theta}_0) = \frac{\partial \log L(\widehat{\theta}_0 \mid x)}{\partial \theta}

and

: I(\widehat{\theta}_0) = -\operatorname{E}\left(\frac{\partial^2 \log L(\widehat{\theta}_0 \mid x)}{\partial \theta \, \partial \theta'} \right).

This can be used to test H_0. The actual formula for the test statistic depends on which estimator of the Fisher information matrix is being used.
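A minimal sketch of the multi-parameter case, assuming two independent binomial samples and the simple null H_0: p_1 = p_2 = 0.5, so that k = 2 and the restricted estimate is the null value itself; the counts below are made up for illustration.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical data: two independent binomial samples, testing H0: p1 = p2 = 0.5.
# With a simple null, the restricted estimate is the null value itself and k = 2.
x = np.array([37, 58])    # successes in each sample
n = np.array([100, 100])  # trials in each sample
p0 = 0.5

# Score vector and Fisher information matrix evaluated at the restricted estimate.
U = x / p0 - (n - x) / (1 - p0)
I = np.diag(n / (p0 * (1 - p0)))   # diagonal because the two samples are independent

stat = U @ np.linalg.inv(I) @ U    # U^T I^{-1} U, asymptotically chi-squared with k = 2 df
p_value = chi2.sf(stat, df=2)
print(stat, p_value)
```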


Special cases

In many situations, the score statistic reduces to another commonly used statistic. In linear regression, the Lagrange multiplier test can be expressed as a function of the ''F''-test. When the data follows a normal distribution, the score statistic is the same as the ''t'' statistic. When the data consists of binary observations, the score statistic is the same as the chi-squared statistic in Pearson's chi-squared test.
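The binary-data case can be checked numerically: for a hypothetical sample (made-up counts below), the score statistic for H_0: p = p_0 coincides with Pearson's chi-squared statistic computed from the two-cell table of successes and failures.

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical binary data: x successes and n - x failures, testing H0: p = p0.
x, n, p0 = 37, 100, 0.5
score = n * (x / n - p0) ** 2 / (p0 * (1 - p0))   # single-parameter score statistic

observed = [x, n - x]
expected = [n * p0, n * (1 - p0)]
pearson_stat, p_value = chisquare(observed, f_exp=expected)

assert np.isclose(score, pearson_stat)  # both equal 6.76 for these counts
```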


See also

* Fisher information
* Uniformly most powerful test
* Score (statistics)
* Sup-LM test

