Bayesian information criterion

In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion (also SIC, SBC, SBIC) is a criterion for model selection among a finite set of models; models with lower BIC are generally preferred. It is based, in part, on the likelihood function, and it is closely related to the Akaike information criterion (AIC).

When fitting models, it is possible to increase the likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC attempt to resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC for sample sizes greater than 7 (the BIC penalty of \ln(n) per parameter exceeds the AIC penalty of 2 per parameter exactly when \ln(n) > 2, i.e. when n > e^2 \approx 7.4). The BIC was developed by Gideon E. Schwarz and published in a 1978 paper, where he gave a Bayesian argument for adopting it.


Definition

The BIC is formally defined as

: \mathrm{BIC} = k\ln(n) - 2\ln(\widehat{L}),

where

* \widehat{L} = the maximized value of the likelihood function of the model M, i.e. \widehat{L} = p(x\mid\widehat{\theta},M), where \widehat{\theta} are the parameter values that maximize the likelihood function;
* x = the observed data;
* n = the number of data points in x, the number of observations, or equivalently, the sample size;
* k = the number of parameters estimated by the model. For example, in multiple linear regression, the estimated parameters are the intercept, the q slope parameters, and the constant variance of the errors; thus, k = q + 2.
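
When a model's maximized log-likelihood is already available, the definition translates directly into code. The following minimal Python sketch is not part of the original article; the function name and the numeric inputs are purely illustrative:

```python
import math

def bic(log_likelihood_hat: float, k: int, n: int) -> float:
    """Bayesian information criterion: k*ln(n) - 2*ln(L_hat).

    log_likelihood_hat -- maximized log-likelihood ln(L_hat) of the model
    k                  -- number of parameters estimated by the model
    n                  -- number of observations
    """
    return k * math.log(n) - 2 * log_likelihood_hat

# Illustrative numbers only: a multiple linear regression with q = 3 slopes
# fitted to n = 100 points, so k = q + 2 = 5 (intercept + slopes + variance).
print(bic(log_likelihood_hat=-210.3, k=5, n=100))
```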


Derivation

Konishi and Kitagawa derive the BIC to approximate the distribution of the data, integrating out the parameters using Laplace's method, starting with the following model evidence:

: p(x\mid M) = \int p(x\mid\theta,M)\, \pi(\theta\mid M) \, d\theta

where \pi(\theta\mid M) is the prior for \theta under model M.

The log-likelihood, \ln(p(x\mid\theta,M)), is then expanded to a second order Taylor series about the MLE, \widehat{\theta}, assuming it is twice differentiable, as follows:

: \ln(p(x\mid\theta,M)) = \ln(\widehat{L}) - \frac{n}{2} (\theta - \widehat{\theta})^{\mathsf{T}} \mathcal{I}(\theta) (\theta - \widehat{\theta}) + R(x,\theta),

where \mathcal{I}(\theta) is the average observed information per observation, and R(x,\theta) denotes the residual term. To the extent that R(x,\theta) is negligible and \pi(\theta\mid M) is relatively linear near \widehat{\theta}, we can integrate out \theta to get the following:

: p(x\mid M) \approx \widehat{L} \left(\frac{2\pi}{n}\right)^{\frac{k}{2}} |\mathcal{I}(\widehat{\theta})|^{-\frac{1}{2}} \pi(\widehat{\theta}).

As n increases, we can ignore |\mathcal{I}(\widehat{\theta})| and \pi(\widehat{\theta}) as they are O(1). Thus,

: p(x\mid M) = \exp\left(\ln\widehat{L} - \frac{k}{2}\ln(n) + O(1)\right) = \exp\left(-\frac{\mathrm{BIC}}{2} + O(1)\right),

where BIC is defined as above, and \widehat{L} either (a) is the Bayesian posterior mode or (b) uses the MLE and the prior \pi(\theta\mid M) has nonzero slope at the MLE. Then the posterior

: p(M\mid x) \propto p(x\mid M)\, p(M) \approx \exp\left(-\frac{\mathrm{BIC}}{2}\right) p(M).
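
As a quick numerical illustration of the last step (not from the source), the sketch below compares the exact log model evidence of a toy conjugate Gaussian-mean model, with known unit variance and an assumed broad N(0, \tau^2) prior, against the -\mathrm{BIC}/2 approximation; the gap stays bounded (O(1)) as n grows:

```python
import numpy as np
from scipy.stats import norm

# Assumed toy model (not from the article): x_i ~ N(theta, 1) with a broad
# N(0, tau^2) prior on theta, so k = 1 estimated parameter (the mean).
rng = np.random.default_rng(0)
tau, theta_true = 10.0, 0.7

for n in (10, 100, 1000, 10000):
    x = rng.normal(theta_true, 1.0, size=n)
    xbar = x.mean()

    # Exact log evidence: the likelihood factorises through xbar, and
    # xbar | M ~ N(0, tau^2 + 1/n) after integrating out theta.
    log_evidence = (norm.logpdf(x, loc=xbar, scale=1.0).sum()
                    - norm.logpdf(xbar, loc=xbar, scale=1.0 / np.sqrt(n))
                    + norm.logpdf(xbar, loc=0.0, scale=np.sqrt(tau**2 + 1.0 / n)))

    # BIC = k*ln(n) - 2*ln(L_hat) with the MLE theta_hat = xbar and k = 1.
    loglik_hat = norm.logpdf(x, loc=xbar, scale=1.0).sum()
    bic = 1 * np.log(n) - 2 * loglik_hat

    # The difference log_evidence - (-BIC/2) should stay bounded as n grows.
    print(n, round(log_evidence - (-bic / 2), 3))
```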


Usage

When picking from several models, ones with lower BIC values are generally preferred. The BIC is an increasing function of the error variance \sigma_e^2 and an increasing function of ''k''. That is, unexplained variation in the dependent variable and the number of explanatory variables increase the value of BIC. However, a lower BIC does not necessarily indicate one model is better than another. Because it involves approximations, the BIC is merely a heuristic. In particular, differences in BIC should never be treated like transformed Bayes factors. It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all models being compared. The models being compared need not be nested, unlike the case when models are being compared using an F-test or a likelihood ratio test.
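
As a concrete illustration (not from the article; the data and model names are made up), the sketch below compares two non-nested linear models for the same dependent variable y by computing each model's BIC from its Gaussian maximized log-likelihood, with k counting the regression coefficients plus the error variance, as in the definition above:

```python
import numpy as np

def ols_bic(y, X):
    """BIC of an ordinary least squares fit with Gaussian errors.
    k counts the regression coefficients plus the error variance."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n                    # MLE of the error variance
    loglik_hat = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = p + 1                                     # coefficients + variance
    return k * np.log(n) - 2 * loglik_hat

rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 2.0 * x1 + rng.normal(scale=0.5, size=n)   # x1 is the true predictor

X_a = np.column_stack([np.ones(n), x1])   # model A: intercept + x1
X_b = np.column_stack([np.ones(n), x2])   # model B: intercept + x2 (not nested in A)
print(ols_bic(y, X_a), ols_bic(y, X_b))   # model A should have the lower BIC
```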


Properties

* The BIC generally penalizes free parameters more strongly than the Akaike information criterion, though it depends on the size of ''n'' and the relative magnitude of ''n'' and ''k''.
* It is independent of the prior.
* It can measure the efficiency of the parameterized model in terms of predicting the data.
* It penalizes the complexity of the model, where complexity refers to the number of parameters in the model.
* It is approximately equal to the minimum description length criterion but with negative sign.
* It can be used to choose the number of clusters according to the intrinsic complexity present in a particular dataset (see the sketch after this list).
* It is closely related to other penalized likelihood criteria such as the deviance information criterion and the Akaike information criterion.
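
As an illustration of the clustering use mentioned above (not from the article; it assumes scikit-learn is available and that a Gaussian mixture is an appropriate model for the data), the number of mixture components can be chosen by fitting candidate models and keeping the one with the lowest BIC:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Assumed toy data: two well-separated Gaussian clusters in 2-D.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, size=(150, 2)),
               rng.normal(5.0, 1.0, size=(150, 2))])

# Fit mixtures with 1..6 components and score each with its BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)   # number of clusters with the lowest BIC
print(bics, best_k)                # best_k should come out as 2 here
```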


Limitations

The BIC suffers from two main limitations:
# the above approximation is only valid for sample size n much larger than the number k of parameters in the model;
# the BIC cannot handle complex collections of models, as in the variable selection (or feature selection) problem in high dimension.


Gaussian special case

Under the assumption that the model errors or disturbances are independent and identically distributed according to a normal distribution and the boundary condition that the derivative of the log likelihood with respect to the true variance is zero, this becomes (''up to an additive constant'', which depends only on ''n'' and not on the model):

: \mathrm{BIC} = n \ln(\widehat{\sigma_e^2}) + k \ln(n)

where \widehat{\sigma_e^2} is the error variance. The error variance in this case is defined as

: \widehat{\sigma_e^2} = \frac{1}{n} \sum_{i=1}^n (x_i - \widehat{x_i})^2,

which is a biased estimator for the true variance.

In terms of the residual sum of squares (RSS) the BIC is

: \mathrm{BIC} = n \ln(\mathrm{RSS}/n) + k \ln(n).

When testing multiple linear models against a saturated model, the BIC can be rewritten in terms of the deviance \chi^2 as:

: \mathrm{BIC} = \chi^2 + k \ln(n)

where k is the number of model parameters in the test.
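
The following sketch (not from the article; the data are made up) checks the ''up to an additive constant'' statement numerically: for two regression models fitted to the same data, the full definition k\ln(n) - 2\ln(\widehat{L}) and the simplified form n\ln(\mathrm{RSS}/n) + k\ln(n) differ by the same amount, n(1 + \ln 2\pi), so both forms rank the models identically:

```python
import numpy as np

def bic_full(y, X):
    """k*ln(n) - 2*ln(L_hat) for an OLS fit with Gaussian errors (k = p + 1)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    loglik_hat = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return (p + 1) * np.log(n) - 2 * loglik_hat

def bic_gaussian(y, X):
    """The simplified Gaussian form n*ln(RSS/n) + k*ln(n)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + (p + 1) * np.log(n)

rng = np.random.default_rng(3)
n = 100
x1, x2 = rng.normal(size=(2, n))
y = 0.5 + 1.5 * x1 + rng.normal(size=n)

for X in (np.column_stack([np.ones(n), x1]),       # model with x1 only
          np.column_stack([np.ones(n), x1, x2])):  # model with x1 and x2
    # The gap equals n*(1 + ln(2*pi)) for every model with the same n.
    print(bic_full(y, X) - bic_gaussian(y, X), n * (1 + np.log(2 * np.pi)))
```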


See also

* Akaike information criterion
* Bayes factor
* Bayesian model comparison
* Deviance information criterion
* Hannan–Quinn information criterion
* Jensen–Shannon divergence
* Kullback–Leibler divergence
* Minimum message length




External links


* Information Criteria and Model Selection
* Sparse Vector Autoregressive Modeling