Goodness of fit

The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g. to test for normality of residuals, to test whether two samples are drawn from identical distributions (see Kolmogorov–Smirnov test), or whether outcome frequencies follow a specified distribution (see Pearson's chi-square test). In the analysis of variance, one of the components into which the variance is partitioned may be a lack-of-fit sum of squares.


Fit of distributions

In assessing whether a given distribution is suited to a data-set, the following tests and their underlying measures of fit can be used (a small worked sketch follows the list):

* Bayesian information criterion
* Kolmogorov–Smirnov test
* Cramér–von Mises criterion
* Anderson–Darling test
* Shapiro–Wilk test
* Chi-squared test
* Akaike information criterion
* Hosmer–Lemeshow test
* Kuiper's test
* Kernelized Stein discrepancy
* Zhang's ZK, ZC and ZA tests
* Moran test
* Density-based empirical likelihood ratio tests
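As an illustration of how several of these tests are applied in practice, here is a minimal sketch, assuming SciPy is available and using synthetic normally distributed data, that checks whether a sample is consistent with a normal distribution via the Kolmogorov–Smirnov, Shapiro–Wilk, and Anderson–Darling tests:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=200)  # synthetic data

# Kolmogorov-Smirnov test against a fully specified normal distribution.
# The parameters must be fixed in advance, not estimated from the sample,
# for the standard KS critical values to be valid.
ks_stat, ks_p = stats.kstest(sample, "norm", args=(5.0, 2.0))

# Shapiro-Wilk test for normality (parameters need not be specified).
sw_stat, sw_p = stats.shapiro(sample)

# Anderson-Darling test; returns critical values rather than a p-value.
ad_result = stats.anderson(sample, dist="norm")

print(f"Kolmogorov-Smirnov: statistic={ks_stat:.3f}, p={ks_p:.3f}")
print(f"Shapiro-Wilk:       statistic={sw_stat:.3f}, p={sw_p:.3f}")
print(f"Anderson-Darling:   statistic={ad_result.statistic:.3f}")
print("AD critical values (15%, 10%, 5%, 2.5%, 1%):", ad_result.critical_values)
```

A small p-value (or, for Anderson–Darling, a statistic exceeding the critical value at the chosen level) is evidence against the hypothesized distribution.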


Regression analysis

In regression analysis, the following topics relate to goodness of fit:

* Coefficient of determination (the R-squared measure of goodness of fit; see the sketch after this list)
* Lack-of-fit sum of squares
* Reduced chi-square
* Regression validation
* Mallows's Cp criterion
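As a brief illustration of the first of these measures, the following sketch, assuming NumPy and using made-up data, computes the coefficient of determination for a simple least-squares line fit:

```python
import numpy as np

# Made-up sample data for a simple linear regression.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])

# Least-squares fit of a straight line y = a*x + b.
a, b = np.polyfit(x, y, deg=1)
y_hat = a * x + b

# R^2 = 1 - SS_res / SS_tot: the fraction of the variance in y
# that is explained by the fitted model.
ss_res = np.sum((y - y_hat) ** 2)      # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")
```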


Categorical data

The following are examples that arise in the context of categorical data.


Pearson's chi-square test

Pearson's chi-square test uses a measure of goodness of fit which is the sum of differences between observed and expected outcome frequencies (that is, counts of observations), each squared and divided by the expectation:

\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}

where:
*''O_i'' = an observed count for bin ''i''
*''E_i'' = an expected count for bin ''i'', asserted by the null hypothesis.

The expected frequency is calculated by:

E_i = \bigl( F(Y_u) - F(Y_l) \bigr) \, N

where:
*''F'' = the cumulative distribution function for the probability distribution being tested,
*''Y_u'' = the upper limit for class ''i'',
*''Y_l'' = the lower limit for class ''i'', and
*''N'' = the sample size.

The resulting value can be compared with a chi-square distribution to determine the goodness of fit. The chi-square distribution has (''k'' − ''c'') degrees of freedom, where ''k'' is the number of non-empty cells and ''c'' is the number of estimated parameters (including location, scale and shape parameters) for the distribution plus one. For example, for a 3-parameter Weibull distribution, ''c'' = 4.
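A minimal sketch of this procedure, assuming SciPy and using made-up exponential data with bin edges chosen purely for illustration, is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=500)  # made-up data
N = len(sample)

# Bin edges (class limits) chosen for illustration; the last bin is open-ended.
edges = np.array([0.0, 1.0, 2.0, 3.0, 5.0, np.inf])
observed, _ = np.histogram(sample, bins=edges)

# Expected counts per bin: E_i = (F(Y_u) - F(Y_l)) * N, using the exponential
# CDF with the scale fixed in advance (no estimated parameters, so c = 1).
F = stats.expon(scale=2.0).cdf
expected = (F(edges[1:]) - F(edges[:-1])) * N

chi2 = np.sum((observed - expected) ** 2 / expected)
df = len(observed) - 1  # k non-empty cells minus c = 1
p_value = stats.chi2.sf(chi2, df)
print(f"chi2 = {chi2:.3f}, df = {df}, p = {p_value:.3f}")
```

Had the scale parameter been estimated from the sample instead, ''c'' would be 2 and a degree of freedom would be lost.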


Example: equal frequencies of men and women

For example, to test the hypothesis that a random sample of 100 people has been drawn from a population in which men and women are equal in frequency, the observed number of men and women would be compared to the theoretical frequencies of 50 men and 50 women. If there were 44 men in the sample and 56 women, then

\chi^2 = \frac{(44 - 50)^2}{50} + \frac{(56 - 50)^2}{50} = 1.44

If the null hypothesis is true (i.e., men and women are chosen with equal probability in the sample), the test statistic will be drawn from a chi-square distribution with one degree of freedom. Though one might expect two degrees of freedom (one each for the men and women), we must take into account that the total number of men and women is constrained (100), and thus there is only one degree of freedom (2 − 1). In other words, if the male count is known the female count is determined, and vice versa.

Consultation of the chi-square distribution for 1 degree of freedom shows that the cumulative probability of observing a difference greater than \chi^2 = 1.44, if men and women are equally numerous in the population, is approximately 0.23. This probability is higher than the conventionally accepted criteria for statistical significance (a probability of 0.001 to 0.05), so normally we would not reject the null hypothesis that the number of men in the population is the same as the number of women (i.e., we would consider our sample to be within the range of what we would expect for a 50/50 male/female ratio).

Note the assumption that the mechanism that has generated the sample is random, in the sense of independent random selection with the same probability, here 0.5 for both males and females. If, for example, each of the 44 males selected brought a male buddy, and each of the 56 females brought a female buddy, each squared difference (O_i - E_i)^2 will increase by a factor of 4, while each expected count E_i will increase by a factor of 2. The value of the statistic will double to 2.88. Knowing this underlying mechanism, we should of course be counting pairs. In general, the mechanism, if not defensibly random, will not be known. The distribution to which the test statistic should be referred may, accordingly, be very different from chi-square.
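As a check on this arithmetic, a minimal sketch, assuming SciPy, reproduces the statistic and the p-value of roughly 0.23:

```python
from scipy import stats

observed = [44, 56]      # observed counts of men and women
expected = [50.0, 50.0]  # expected counts under the 50/50 null hypothesis

# Pearson chi-square goodness-of-fit test with k - 1 = 1 degree of freedom.
result = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# chi2 = 1.44, p = 0.230
```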


Binomial case

A binomial experiment is a sequence of independent trials in which the trials can result in one of two outcomes, success or failure. There are ''n'' trials each with probability of success, denoted by ''p''. Provided that ''np_i'' ≫ 1 for every ''i'' (where ''i'' = 1, 2, ..., ''k''), then

\chi^2 = \sum_{i=1}^{k} \frac{(N_i - n p_i)^2}{n p_i} = \sum_{\text{all cells}} \frac{(O - E)^2}{E}.

This has approximately a chi-square distribution with ''k'' − 1 degrees of freedom. The fact that there are ''k'' − 1 degrees of freedom is a consequence of the restriction \sum N_i = n. We know there are ''k'' observed cell counts; however, once any ''k'' − 1 are known, the remaining one is uniquely determined. Basically, one can say there are only ''k'' − 1 freely determined cell counts, thus ''k'' − 1 degrees of freedom.
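A minimal sketch of this computation, assuming SciPy and using made-up counts for a die treated as ''k'' = 6 cells, is:

```python
import numpy as np
from scipy import stats

# Made-up observed counts for n = 120 rolls of a fair die (k = 6 cells).
N_i = np.array([18, 22, 19, 25, 16, 20])
n = N_i.sum()
p_i = np.full(6, 1 / 6)       # hypothesized cell probabilities
assert np.all(n * p_i > 1)    # loose check of the np_i >> 1 condition

# chi2 = sum over cells of (N_i - n p_i)^2 / (n p_i).
chi2 = np.sum((N_i - n * p_i) ** 2 / (n * p_i))
df = len(N_i) - 1             # k - 1 degrees of freedom
p_value = stats.chi2.sf(chi2, df)
print(f"chi2 = {chi2:.3f}, df = {df}, p = {p_value:.3f}")
```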


''G''-test

''G''-tests are likelihood-ratio tests of statistical significance that are increasingly being used in situations where Pearson's chi-square tests were previously recommended. The general formula for ''G'' is:

G = 2 \sum_i O_i \ln\!\left(\frac{O_i}{E_i}\right),

where O_i and E_i are the same as for the chi-square test, \ln denotes the natural logarithm, and the sum is taken over all non-empty cells. Furthermore, the total observed count should be equal to the total expected count:

\sum_i O_i = \sum_i E_i = N

where N is the total number of observations. ''G''-tests have been recommended at least since the 1981 edition of the popular statistics textbook by Robert R. Sokal and F. James Rohlf.
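A minimal sketch, assuming SciPy and reusing the made-up men/women counts from the example above, computes ''G'' both directly and via SciPy's power-divergence family, of which the ''G''-test is the log-likelihood-ratio member:

```python
import numpy as np
from scipy import stats

observed = np.array([44.0, 56.0])
expected = np.array([50.0, 50.0])  # totals match: sum(O) == sum(E) == N

# Direct computation: G = 2 * sum(O_i * ln(O_i / E_i)).
G = 2.0 * np.sum(observed * np.log(observed / expected))

# Equivalent via SciPy's power-divergence statistic.
result = stats.power_divergence(f_obs=observed, f_exp=expected,
                                lambda_="log-likelihood")
p_value = stats.chi2.sf(G, df=len(observed) - 1)
print(f"G = {G:.3f} (scipy: {result.statistic:.3f}), p = {p_value:.3f}")
```

Here ''G'' ≈ 1.44, close to the Pearson statistic for the same data, as expected when observed and expected counts are not far apart.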


See also

* All models are wrong
* Deviance (statistics) (related to GLM)
* Overfitting
* Statistical model validation
* Theil–Sen estimator


Further reading

* Vexler, Albert; Gurevich, Gregory (2010), "Empirical likelihood ratios applied to goodness-of-fit tests based on sample entropy", ''Computational Statistics & Data Analysis'', '''54''' (2): 531–545, doi:10.1016/j.csda.2009.09.025