Focused Information Criterion
In statistics, the focused information criterion (FIC) is a method for selecting the most appropriate model among a set of competitors for a given data set. Unlike most other model selection strategies, such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the deviance information criterion (DIC), the FIC does not attempt to assess the overall fit of candidate models but focuses attention directly on the parameter of primary interest in the statistical analysis, say \mu , for which competing models lead to different estimates, say \hat\mu_j for model j . The FIC method consists in first developing an exact or approximate expression for the precision or quality of each estimator, say r_j for \hat\mu_j , and then using data to estimate these precision measures, say \hat r_j . In the end, the model with the best estimated precision is selected.
The FIC methodology was developed by Gerda Claeskens and Nils Lid Hjort, first in two 2003 discussion articles in the ''Journal of the American Statistical Association'' and later in other papers and in their 2008 book.

The concrete formulae and implementation of the FIC depend firstly on the particular parameter of interest, the choice of which rests not on mathematics but on the scientific and statistical context. Thus the FIC apparatus may select one model as most appropriate for estimating a quantile of a distribution while preferring another model as best for estimating the mean value. Secondly, the FIC formulae depend on the specifics of the models used for the observed data and on how precision is to be measured.
The clearest case is where precision is taken to be mean squared error, say r_j = b_j^2 + \tau_j^2 in terms of squared bias and variance for the estimator associated with model j . FIC formulae are then available in a variety of parametric, semiparametric and nonparametric situations, involving separate estimation of squared bias and variance and leading to the estimated precision \hat r_j . In the end, the FIC selects the model with the smallest estimated mean squared error.
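The selection step itself is simple once estimated squared biases and variances are available for each candidate; how those quantities are obtained depends on the model class and on the Claeskens–Hjort formulae for the situation at hand. The sketch below, with purely illustrative function names and numbers, forms \hat r_j = \hat b_j^2 + \hat\tau_j^2 for each model and picks the minimiser.

```python
import numpy as np

def fic_scores(bias_sq_hat, var_hat):
    """Estimated mean squared errors r_j = b_j^2 + tau_j^2, one per candidate model."""
    return np.asarray(bias_sq_hat, dtype=float) + np.asarray(var_hat, dtype=float)

def select_by_fic(bias_sq_hat, var_hat, model_names):
    """Return the candidate model with the smallest estimated MSE, plus all scores."""
    scores = fic_scores(bias_sq_hat, var_hat)
    return model_names[int(np.argmin(scores))], scores

# Hypothetical estimated quantities for three candidate models:
names = ["narrow", "intermediate", "wide"]
best, scores = select_by_fic(bias_sq_hat=[0.040, 0.010, 0.000],
                             var_hat=[0.005, 0.012, 0.030],
                             model_names=names)
print(best, scores)   # "intermediate": scores are 0.045, 0.022, 0.030
```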
Associated with the use of the FIC for selecting a good model is the ''FIC plot'', designed to give a clear and informative picture of all estimates, across all candidate models, and their merit. It displays the estimates on the y axis and the FIC scores on the x axis; estimates to the left in the plot are thus associated with the better models, while those in the middle and to the right stem from models that are less adequate, or inadequate, for the purpose of estimating the focus parameter in question.
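A bare-bones version of such a plot can be drawn as follows; the vectors of focus estimates and FIC scores are assumed to have been computed already, one entry per candidate model, and the names are again illustrative.

```python
import matplotlib.pyplot as plt

def fic_plot(fic_scores, estimates, model_names):
    """FIC plot: focus estimates (y axis) against FIC scores (x axis).

    Points further to the left correspond to candidate models judged better
    for estimating the focus parameter."""
    fig, ax = plt.subplots()
    ax.scatter(fic_scores, estimates)
    for x, y, name in zip(fic_scores, estimates, model_names):
        ax.annotate(name, (x, y), textcoords="offset points", xytext=(4, 4))
    ax.set_xlabel("FIC score (estimated risk)")
    ax.set_ylabel("estimate of focus parameter")
    return ax

# Hypothetical output from three candidate models:
fic_plot([0.045, 0.022, 0.030], [1.92, 2.10, 2.15],
         ["narrow", "intermediate", "wide"])
plt.show()
```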
Generally speaking, complex models (with many parameters relative to the sample size) tend to yield estimators with small bias but high variance, while more parsimonious models (with fewer parameters) typically yield estimators with larger bias but smaller variance. The FIC method balances these two desiderata, small bias and small variance, in an optimal fashion. The main difficulty lies with the bias b_j , since it involves the distance from the expected value of the estimator to the true underlying quantity to be estimated, and the true data-generating mechanism may lie outside each of the candidate models.

In situations where there is not a single focus parameter but rather a family of them, there are versions of ''average FIC'' (AFIC or wFIC) that find the best model in terms of suitably weighted performance measures, for example when searching for a regression model that performs particularly well over a portion of the covariate space. It is also possible to keep several of the best models on board, ending the statistical analysis with a data-dictated weighted average of their estimators, typically giving the highest weight to the estimators associated with the best FIC scores. Such schemes of ''model averaging'' extend the direct FIC selection method.
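One simple way of turning FIC scores into averaging weights is sketched below; the exponential down-weighting is only an illustrative choice (Hjort and Claeskens discuss several weighting schemes, including smoothed FIC weights), and the numbers continue the hypothetical three-model example above.

```python
import numpy as np

def fic_model_average(estimates, fic_scores, scale=1.0):
    """Weighted average of the candidate estimates of the focus parameter.

    Weights decrease with the FIC score, so models with small estimated risk
    dominate; 'scale' controls how sharply poor models are down-weighted.
    Illustrative weighting only, not the authors' exact smoothed-FIC formula."""
    scores = np.asarray(fic_scores, dtype=float)
    weights = np.exp(-(scores - scores.min()) / scale)
    weights /= weights.sum()
    return float(np.dot(weights, estimates)), weights

mu_bar, w = fic_model_average(estimates=[1.92, 2.10, 2.15],
                              fic_scores=[0.045, 0.022, 0.030],
                              scale=0.02)
print(mu_bar, w)   # the intermediate and wide models receive most of the weight
```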
The FIC methodology applies in particular to the selection of variables in different forms of regression analysis, including the framework of generalised linear models and the semiparametric proportional hazards models (i.e. Cox regression).
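As a stylised example of variable selection with a focus parameter, the sketch below compares a narrow and a wide linear regression model for estimating the mean response at a given covariate value. For simplicity it estimates each candidate's variance by a nonparametric bootstrap and treats the wide model's estimate as the reference point for the narrow model's bias; this is a rough stand-in for the exact Claeskens–Hjort formulae, and all data and names are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x1 and only weakly on x2.
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.15 * x2 + rng.normal(scale=1.0, size=n)

X_wide = np.column_stack([np.ones(n), x1, x2])   # wide model: intercept, x1, x2
X_narrow = X_wide[:, :2]                         # narrow model: intercept, x1
x0 = np.array([1.0, 0.5, 1.0])                   # covariate point defining the focus mu = E(y | x0)

def fit_mu(X, y_vec, x0_cols):
    """Least-squares fit; returns the estimated mean response at x0."""
    beta, *_ = np.linalg.lstsq(X, y_vec, rcond=None)
    return float(x0_cols @ beta)

def focus_estimates(idx):
    """Focus estimates from the narrow and the wide model on a (re)sample."""
    return (fit_mu(X_narrow[idx], y[idx], x0[:2]),
            fit_mu(X_wide[idx], y[idx], x0))

mu_narrow, mu_wide = focus_estimates(np.arange(n))

# Bootstrap variances of the two focus estimators.
B = 500
boot = np.array([focus_estimates(rng.integers(0, n, size=n)) for _ in range(B)])
var_narrow, var_wide = boot.var(axis=0)

# Crude bias estimates: the wide model is treated as (nearly) unbiased for the focus.
bias_sq_narrow = (mu_narrow - mu_wide) ** 2
bias_sq_wide = 0.0

r_narrow = bias_sq_narrow + var_narrow
r_wide = bias_sq_wide + var_wide
print("estimated risks:", r_narrow, r_wide)
print("FIC-style choice:", "narrow" if r_narrow < r_wide else "wide")
```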


See also

* Akaike information criterion
* Bayesian information criterion
* Deviance information criterion
* Hannan–Quinn information criterion
* Shibata information criterion


References

* Claeskens, G. and Hjort, N.L. (2003). "The focused information criterion" (with discussion). ''Journal of the American Statistical Association'', volume 98, pp. 879–899.
* Hjort, N.L. and Claeskens, G. (2003). "Frequentist model average estimators" (with discussion). ''Journal of the American Statistical Association'', volume 98, pp. 900–916.
* Hjort, N.L. and Claeskens, G. (2006). "Focused information criteria and model averaging for the Cox hazard regression model." ''Journal of the American Statistical Association'', volume 101, pp. 1449–1464. doi:10.1198/016214506000000069.
* Claeskens, G. and Hjort, N.L. (2008). ''Model Selection and Model Averaging.'' Cambridge University Press.


External links


* Interview on frequentist model averaging with Essential Science Indicators
* Webpage for ''Model Selection and Model Averaging'', the Claeskens and Hjort book