Model selection is the task of selecting a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected are well-suited to the problem of model selection. Given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice (Occam's razor).

Konishi & Kitagawa (2008) state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, Cox (2006) has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".

Model selection may also refer to the problem of selecting a few representative models from a large set of computational models for the purpose of decision making or optimization under uncertainty.


Introduction

In its most basic forms, model selection is one of the fundamental tasks of scientific inquiry. Determining the principle that explains a series of observations is often linked directly to a mathematical model predicting those observations. For example, when Galileo performed his inclined plane experiments, he demonstrated that the motion of the balls fitted the parabola predicted by his model.

Of the countless number of possible mechanisms and processes that could have produced the data, how can one even begin to choose the best model? The mathematical approach commonly taken decides among a set of candidate models; this set must be chosen by the researcher. Often simple models such as polynomials are used, at least initially. Burnham & Anderson (2002) emphasize throughout their book the importance of choosing models based on sound scientific principles, such as an understanding of the phenomenological processes or mechanisms (e.g., chemical reactions) underlying the data.

Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by ''best'' is controversial. A good model selection technique will balance goodness of fit with simplicity. More complex models will be better able to adapt their shape to fit the data (for example, a fifth-order polynomial can exactly fit six points), but the additional parameters may not represent anything useful. (Perhaps those six points are really just randomly distributed about a straight line.) Goodness of fit is generally determined using a likelihood-ratio approach, or an approximation of this, leading to a chi-squared test. The complexity is generally measured by counting the number of parameters in the model.

Model selection techniques can be considered as estimators of some physical quantity, such as the probability of the model producing the given data. The bias and variance are both important measures of the quality of this estimator; efficiency is also often considered.

A standard example of model selection is that of curve fitting, where, given a set of points and other background knowledge (e.g., the points are the result of i.i.d. samples), we must select a curve that describes the function that generated the points.
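The trade-off described above can be made concrete with a small numerical sketch. The following Python snippet is purely illustrative (the straight-line data-generating process, the noise level, and the choice of degrees 1 and 5 are assumptions made here, not taken from a cited source): a fifth-order polynomial interpolates six noisy points drawn from a straight line almost exactly, yet typically predicts fresh points from the same line worse than the simple linear fit.

```python
# Illustrative sketch of the goodness-of-fit vs. simplicity trade-off
# (assumed set-up, not from a cited source).
import numpy as np

rng = np.random.default_rng(0)

def true_line(x):
    return 1.0 + 2.0 * x                      # assumed data-generating mechanism

x_train = np.linspace(0.0, 1.0, 6)            # six observed points
y_train = true_line(x_train) + rng.normal(scale=0.2, size=x_train.size)

x_test = np.linspace(0.0, 1.0, 50)            # fresh data from the same line
y_test = true_line(x_test) + rng.normal(scale=0.2, size=x_test.size)

for degree in (1, 5):
    coeffs = np.polyfit(x_train, y_train, deg=degree)   # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_mse:.4f}, test MSE = {test_mse:.4f}")
```

The degree-5 fit drives the training error to (numerically) zero while typically increasing the error on the held-out points, which is exactly the kind of overfitting that model selection criteria are designed to penalize.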


Two directions of model selection

There are two main objectives in inference and learning from data. One is scientific discovery, also called statistical inference: understanding the underlying data-generating mechanism and interpreting the nature of the data. The other objective is predicting future or unseen observations, also called statistical prediction. In the second objective, the data scientist is not necessarily concerned with an accurate probabilistic description of the data. Of course, one may also be interested in both directions.

In line with the two different objectives, model selection can also have two directions: model selection for inference and model selection for prediction. The first direction is to identify the best model for the data, which will preferably provide a reliable characterization of the sources of uncertainty for scientific interpretation. For this goal, it is important that the selected model not be too sensitive to the sample size. Accordingly, an appropriate notion for evaluating model selection is selection consistency, meaning that the most robust candidate will be consistently selected given sufficiently many data samples.

The second direction is to choose a model as machinery to offer excellent predictive performance. For the latter, however, the selected model may simply be the lucky winner among a few close competitors, yet the predictive performance can still be the best possible. If so, the model selection is fine for the second goal (prediction), but the use of the selected model for insight and interpretation may be severely unreliable and misleading. Moreover, for very complex models selected this way, even predictions may be unreasonable for data only slightly different from those on which the selection was made.
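As a hedged illustration of the notion of selection consistency mentioned above, the following Python sketch (the Gaussian linear set-up, the quadratic competitor, and the use of BIC as the consistent criterion are assumptions chosen here for illustration) checks how often the criterion recovers the true, simpler model as the sample size grows.

```python
# Monte Carlo sketch of selection consistency (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(1)

def bic(X, y):
    """BIC for ordinary least squares, up to an additive constant."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + k * np.log(n)

for n in (20, 100, 500):
    hits = 0
    runs = 200
    for _ in range(runs):
        x = rng.uniform(-2.0, 2.0, size=n)
        y = 1.0 + 2.0 * x + rng.normal(size=n)            # true model is linear
        X_linear = np.column_stack([np.ones(n), x])
        X_quadratic = np.column_stack([np.ones(n), x, x ** 2])
        hits += bic(X_linear, y) < bic(X_quadratic, y)    # lower BIC is preferred
    print(f"n = {n:4d}: true (linear) model selected in {hits / runs:.0%} of runs")
```

A criterion aimed purely at prediction (such as cross-validation) need not show this behaviour; it may keep selecting the larger model when the extra flexibility does not hurt predictive accuracy.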


Methods to assist in choosing the set of candidate models

* Data transformation (statistics)
* Exploratory data analysis
* Model specification
* Scientific method


Criteria

Below is a list of criteria for model selection. The most commonly used criteria are (i) the Akaike information criterion and (ii) the Bayes factor and/or the Bayesian information criterion (which to some extent approximates the Bayes factor).

* Akaike information criterion (AIC), a measure of the goodness of fit of an estimated statistical model
* Bayes factor
* Bayesian information criterion (BIC), also known as the Schwarz information criterion, a statistical criterion for model selection
* Bridge criterion (BC), a statistical criterion that can attain the better performance of AIC and BIC regardless of whether the model specification is appropriate
* Cross-validation
* Deviance information criterion (DIC), another Bayesian-oriented model selection criterion
* False discovery rate
* Focused information criterion (FIC), a selection criterion sorting statistical models by their effectiveness for a given focus parameter
* Hannan–Quinn information criterion (HQC), an alternative to the Akaike and Bayesian criteria
* Kashyap information criterion (KIC), an alternative to AIC and BIC that uses the Fisher information matrix
* Likelihood-ratio test
* Mallows's ''Cp''
* Minimum description length
* Minimum message length (MML)
* PRESS statistic, also known as the PRESS criterion
* Structural risk minimization
* Stepwise regression
* Watanabe–Akaike information criterion (WAIC), also called the widely applicable information criterion
* Extended Bayesian information criterion (EBIC), an extension of the ordinary Bayesian information criterion (BIC) for models with large parameter spaces
* Extended Fisher information criterion (EFIC), a model selection criterion for linear regression models

Among these criteria, cross-validation is typically the most accurate, and computationally the most expensive, for supervised learning problems. A short illustrative comparison of AIC and BIC on a simple regression problem is sketched after this list.
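As a minimal, hedged sketch of how two of the listed criteria are used in practice, the following Python snippet computes AIC and BIC for three nested linear regression models under a Gaussian error assumption. The data-generating model, the candidate designs, and the constant-dropping forms of the formulas AIC = n ln(RSS/n) + 2k and BIC = n ln(RSS/n) + k ln(n) are illustrative choices, not an example from the article.

```python
# Comparing candidate regression models with AIC and BIC (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(-2.0, 2.0, size=n)
y = 0.5 + 1.5 * x + rng.normal(scale=1.0, size=n)        # true model is linear

def information_criteria(X, y):
    """AIC and BIC for ordinary least squares, with additive constants dropped.

    k counts only the regression coefficients; the error-variance parameter adds
    the same amount to every candidate and is omitted for comparison purposes.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1]
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

candidates = {
    "intercept only": np.column_stack([np.ones(n)]),
    "linear":         np.column_stack([np.ones(n), x]),
    "quadratic":      np.column_stack([np.ones(n), x, x ** 2]),
}
for name, X in candidates.items():
    aic, bic = information_criteria(X, y)
    print(f"{name:15s} AIC = {aic:8.2f}  BIC = {bic:8.2f}")
```

Both criteria should favour the linear candidate here; with many more parameters or a smaller sample, the heavier ln(n) penalty makes BIC the more conservative of the two.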


See also

* All models are wrong
* Analysis of competing hypotheses
* Automated machine learning (AutoML)
* Bias–variance dilemma
* Feature selection
* Freedman's paradox
* Grid search
* Identifiability analysis
* Log-linear analysis
* Model identification
* Occam's razor
* Optimal design
* Parameter identification problem
* Scientific modelling
* Statistical model validation
* Stein's paradox

