Likelihoodist Statistics

Likelihoodist statistics or likelihoodism is an approach to statistics that exclusively or primarily uses the likelihood function. Likelihoodist statistics is a more minor school than the main approaches of Bayesian statistics and frequentist statistics, but it has some adherents and applications. The central idea of likelihoodism is the likelihood principle: data are interpreted as evidence, and the strength of the evidence is measured by the likelihood function. Beyond this, there are significant differences within likelihood approaches: "orthodox" likelihoodists consider data ''only'' as evidence and do not use it as the basis of statistical inference, while others make inferences based on likelihood, but without using Bayesian inference or frequentist inference. Likelihoodism is thus criticized either for not providing a basis for belief or action (if it fails to make inferences) or for not satisfying the requirements of these other schools. The likelihood function is also used in Bayesian and frequentist statistics, but the approaches differ in how it is used. Some likelihoodists consider their use of likelihood an alternative to other approaches, while others consider it complementary and compatible with other approaches; see ''Relation with other theories'' below.


Relation with other theories

While likelihoodism is a distinct approach to statistical inference, it can be related to or contrasted with other theories and methodologies in statistics. Some notable connections:
# Bayesian statistics: Bayesian statistics is an alternative approach to statistical inference that incorporates prior information and updates it using observed data to obtain posterior probabilities. Likelihoodism and Bayesian statistics are compatible in the sense that both methods use the likelihood function, but they differ in their treatment of prior information: Bayesian statistics explicitly incorporates prior beliefs into the analysis, whereas likelihoodism focuses solely on the likelihood function without specifying a prior distribution.
# Frequentist statistics: Frequentist statistics, also known as classical inference, is another major framework for statistical analysis. Frequentist methods emphasize properties of repeated sampling and focus on concepts such as unbiasedness, consistency, and hypothesis testing. Likelihoodism can be seen as a departure from traditional frequentist methods in that it places the likelihood function at the core of statistical inference. Likelihood-based methods provide a bridge between the likelihoodist perspective and frequentist approaches by using likelihood ratios for hypothesis testing and for constructing confidence intervals.
# Fisherian statistics: Likelihoodism has deep connections to the statistical philosophy of Ronald Fisher. Fisher introduced the concept of likelihood and its maximization as a criterion for estimating parameters, and his approach emphasized sufficiency and maximum likelihood estimation (MLE). Likelihoodism can be seen as an extension of Fisherian statistics, refining and expanding the use of likelihood in statistical inference.
# Information theory: Information theory, developed by Claude Shannon, provides a mathematical framework for quantifying information content and communication. The concept of entropy in information theory is connected to the likelihood function and to the Akaike information criterion (AIC). AIC, which incorporates a penalty term for model complexity, can be viewed as an information-theoretic approach to model selection that balances model fit against model complexity (see the sketch after this list).
# Decision theory: Decision theory combines statistical inference with decision-making under uncertainty, considering the trade-off between risks and potential losses. Likelihoodism can be integrated with decision theory to make decisions based on the likelihood function, such as choosing the model with the highest likelihood or evaluating different decision options by their associated likelihoods.


Criticism

While likelihood-based statistics have been widely used and have many advantages, they are not without criticism. Common criticisms of likelihoodist statistics include:
# Model dependence: Likelihood-based inference relies heavily on the choice of a specific statistical model. If the chosen model does not accurately represent the true underlying data-generating process, the resulting estimates and inferences may be biased or misleading. Model misspecification can lead to incorrect conclusions, especially in complex real-world settings where the true model may be unknown or difficult to capture.
# Difficulty of interpretation: Likelihood-based statistics focus on optimizing the likelihood function to estimate parameters, but the resulting estimates may not be intuitive or easily interpretable. The estimated parameters may lack a direct, meaningful interpretation in the context of the problem being studied, which can make it challenging for practitioners to communicate results to non-technical audiences or to base practical decisions on them.
# Sensitivity to sample size: Likelihood-based methods can be sensitive to the sample size of the data. With small samples, the likelihood function can be highly variable, leading to unstable estimates. This instability can also affect model selection, as likelihood ratio tests or information criteria may not perform well when sample sizes are small.
# Assumption of independence: Likelihood-based inference often assumes that the observed data are independent and identically distributed (IID). In many real-world settings, however, data points exhibit dependence or correlation, and ignoring this dependence can lead to biased estimates or inaccurate hypothesis tests.
# Lack of robustness: Likelihood-based methods are not always robust to violations of model assumptions or to outliers in the data. If the data deviate from the assumed distribution or if extreme observations are present, the estimates can be heavily influenced by these outliers, leading to unreliable results (see the sketch after this list).
# Computational complexity: Estimating parameters from likelihood functions can be computationally intensive, especially for complex models, large datasets, or highly non-linear systems. Optimization algorithms used to maximize the likelihood may require substantial computational resources or may fail to converge to the global maximum, yielding suboptimal estimates.
# Lack of uncertainty quantification: Likelihood-based inference often provides point estimates of parameters without an explicit quantification of uncertainty. While techniques such as confidence intervals or standard errors can approximate uncertainty, they rely on assumptions that may not always hold. Bayesian methods, by contrast, provide a more formal and coherent framework for uncertainty quantification.
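The robustness concern in item 5 can be illustrated with a small simulation (a sketch assuming Python with NumPy; the data are artificial and not drawn from the article's sources). Under a normal model the maximum-likelihood estimate of the mean is the sample mean, which a single gross outlier can shift substantially, whereas a robust summary such as the median is barely affected.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.normal(loc=0.0, scale=1.0, size=50)
contaminated = np.append(clean, 50.0)  # add a single gross outlier

# Under a normal model, the MLE of the mean is the sample mean,
# which a single extreme observation can shift substantially.
print(f"MLE of mean, clean data:       {clean.mean():.3f}")
print(f"MLE of mean, with one outlier: {contaminated.mean():.3f}")

# A robust summary such as the median is barely affected.
print(f"Median, with one outlier:      {np.median(contaminated):.3f}")
```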


History

Likelihoodism emerged as a distinct school with A. W. F. Edwards's systematic treatment of statistics based on likelihood, which built on significant earlier work. While comparing ratios of probabilities dates to early statistics and probability, notably Bayesian inference as developed by Pierre-Simon Laplace from the late 1700s, likelihood as a distinct concept is due to Ronald Fisher. Likelihood played an important role in Fisher's statistics, but he developed and used many non-likelihood frequentist techniques as well. His late writings emphasize likelihood more strongly and can be considered a precursor to a systematic theory of likelihoodism. The likelihood principle was proposed in 1962 by several authors and was followed by the law of likelihood; these laid the foundation for likelihoodism. Edwards's version of likelihoodism treated likelihood only as evidence, while others proposed inference based only on likelihood, notably as extensions of maximum likelihood estimation; a notable proponent of likelihood-based inference was John Nelder. Several textbooks take a likelihoodist approach, and collections of relevant papers on the subject have been published.


See also

* Akaike information criterion
* Foundations of statistics
* Likelihood ratio test




External links

* Likelihood Ratios, Likelihoodism, and the Law of Likelihood, ''Stanford Encyclopedia of Philosophy'': https://plato.stanford.edu/entries/logic-inductive/sup-likelihood.html