Likelihood interval

In statistics, suppose that we have been given some data and are selecting a statistical model for those data. The ''relative likelihood'' compares the plausibilities of different candidate models, or of different values of a parameter of a single model.


Relative likelihood of parameter values

Assume that we are given some data for which we have a statistical model with parameter \theta. Suppose that the maximum likelihood estimate for \theta is \hat{\theta}. Relative plausibilities of other \theta values may be found by comparing the likelihoods of those other values with the likelihood of \hat{\theta}. The ''relative likelihood'' of \theta is defined to be
: \frac{\mathcal{L}(\theta \mid x)}{\mathcal{L}(\hat{\theta} \mid x)},
where \mathcal{L}(\theta \mid x) denotes the likelihood function. Thus, the relative likelihood is the likelihood ratio with fixed denominator \mathcal{L}(\hat{\theta} \mid x). The function
: \theta \mapsto \frac{\mathcal{L}(\theta \mid x)}{\mathcal{L}(\hat{\theta} \mid x)}
is the ''relative likelihood function''.
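
As a numerical illustration, the relative likelihood function can be evaluated directly from a log-likelihood. The following Python sketch assumes a binomial model (x successes in n trials), chosen here purely as an example; the function name relative_likelihood is illustrative and not part of any library.

    import numpy as np

    def relative_likelihood(p, x, n):
        # Relative likelihood R(p) = L(p | x) / L(p_hat | x) for a binomial
        # probability p given x successes in n trials (illustrative model,
        # assumed for this example only).
        p_hat = x / n  # maximum likelihood estimate of p
        def log_lik(q):
            # Binomial log-likelihood up to an additive constant,
            # which cancels in the ratio.
            return x * np.log(q) + (n - x) * np.log(1 - q)
        return np.exp(log_lik(p) - log_lik(p_hat))

    print(relative_likelihood(0.35, 7, 20))  # 1.0 at the MLE p_hat = 0.35
    print(relative_likelihood(0.50, 7, 20))  # about 0.40, i.e. less plausible

The ratio is computed on the log scale and exponentiated at the end, so any additive constants in the log-likelihood cancel and overflow is avoided.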


Likelihood region

A ''likelihood region'' is the set of all values of \theta whose relative likelihood is greater than or equal to a given threshold. In terms of percentages, a ''p% likelihood region'' for \theta is defined to be
: \left\{ \theta : \frac{\mathcal{L}(\theta \mid x)}{\mathcal{L}(\hat{\theta} \mid x)} \ge \frac{p}{100} \right\}.
If \theta is a single real parameter, a p% likelihood region will usually comprise an interval of real values. If the region does comprise an interval, then it is called a ''likelihood interval''.

Likelihood intervals, and more generally likelihood regions, are used for interval estimation within likelihood-based ("likelihoodist") statistics: they are similar to confidence intervals in frequentist statistics and credible intervals in Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of coverage probability (frequentism) or posterior probability (Bayesianism).

Given a model, likelihood intervals can be compared to confidence intervals. If \theta is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for \theta will be the same as a 95% confidence interval (19/20 coverage probability). In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods, and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom (df) equal to the difference in df between the two models (therefore, the e^{-2} likelihood interval is the same as the 0.954 confidence interval, assuming the difference in df to be 1).
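
A 14.65% likelihood interval can likewise be found numerically by scanning the relative likelihood over a grid and keeping the values above the threshold. The sketch below reuses the binomial example from the previous section (an assumed model, for illustration only) and compares the result with an approximate 95% Wald confidence interval.

    import numpy as np

    x, n = 7, 20
    p_hat = x / n

    def log_lik(p):
        # Binomial log-likelihood up to an additive constant.
        return x * np.log(p) + (n - x) * np.log(1 - p)

    grid = np.linspace(0.001, 0.999, 9999)
    rel_lik = np.exp(log_lik(grid) - log_lik(p_hat))

    # 14.65% likelihood interval: all p with relative likelihood >= 0.1465,
    # which under the usual asymptotics matches a 95% confidence interval.
    inside = grid[rel_lik >= 0.1465]
    print("likelihood interval:", inside.min(), inside.max())

    # Approximate 95% Wald confidence interval, for comparison.
    se = np.sqrt(p_hat * (1 - p_hat) / n)
    print("Wald 95% CI:        ", p_hat - 1.96 * se, p_hat + 1.96 * se)

The two intervals are close but not identical here; the agreement stated above holds asymptotically, and the likelihood interval need not be symmetric about the maximum likelihood estimate.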


Relative likelihood of models

The definition of relative likelihood can be generalized to compare different statistical models. This generalization is based on AIC (Akaike information criterion), or sometimes AICc (Akaike information criterion with correction). Suppose that for some given data we have two statistical models, M_1 and M_2. Also suppose that \mathrm{AIC}(M_1) \le \mathrm{AIC}(M_2). Then the ''relative likelihood'' of M_2 with respect to M_1 is defined as follows.
:: \exp \left( \frac{\mathrm{AIC}(M_1) - \mathrm{AIC}(M_2)}{2} \right)
To see that this is a generalization of the earlier definition, suppose that we have some model M with a (possibly multivariate) parameter \theta. Then for any \theta, set M_2 = M(\theta), and also set M_1 = M(\hat{\theta}). The general definition now gives the same result as the earlier definition.
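
As a minimal sketch, the AIC-based relative likelihood of two models reduces to a one-line computation once each model's AIC is known; the parameter counts and maximized log-likelihoods below are hypothetical, chosen only to show the formula.

    import math

    def aic(num_params, max_log_lik):
        # Akaike information criterion: 2k - 2 ln(L_hat).
        return 2 * num_params - 2 * max_log_lik

    def relative_likelihood_of_models(aic_1, aic_2):
        # Relative likelihood of model M2 with respect to M1,
        # assuming AIC(M1) <= AIC(M2).
        return math.exp((aic_1 - aic_2) / 2)

    # Hypothetical fitted models: M1 with 2 parameters, M2 with 3.
    aic_m1 = aic(2, -48.0)   # = 100.0
    aic_m2 = aic(3, -48.6)   # = 103.2
    print(relative_likelihood_of_models(aic_m1, aic_m2))  # about 0.20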


See also

* Likelihood function
* Statistical model selection
* Statistical model specification
* Statistical model validation

