G-test

In statistics, ''G''-tests are likelihood-ratio or maximum likelihood statistical significance tests that are increasingly being used in situations where chi-squared tests were previously recommended.


Formulation

The general formula for ''G'' is

: G = 2\sum_{i} O_i \ln\left(\frac{O_i}{E_i}\right) ,

where O_i \geq 0 is the observed count in a cell, E_i > 0 is the expected count under the null hypothesis, \ln denotes the natural logarithm, and the sum is taken over all non-empty cells. The resulting ''G'' is chi-squared distributed.

Furthermore, the total observed count should be equal to the total expected count:

: \sum_i O_i = \sum_i E_i = N ,

where N is the total number of observations.
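As a quick illustration of the formula, the following sketch computes ''G'' directly from observed and expected counts; the counts are invented for the example:

```python
import numpy as np

def g_statistic(observed, expected):
    """Compute G = 2 * sum(O_i * ln(O_i / E_i)) over non-empty cells."""
    O = np.asarray(observed, dtype=float)
    E = np.asarray(expected, dtype=float)
    mask = O > 0   # terms with O_i = 0 contribute 0 and are simply dropped
    return 2.0 * np.sum(O[mask] * np.log(O[mask] / E[mask]))

O = np.array([30, 14, 34, 45, 57, 20])   # observed counts (hypothetical)
E = np.full(6, O.sum() / 6)              # expected counts under a uniform null
print(g_statistic(O, E))
```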


Derivation

We can derive the value of the ''G''-test from the log-likelihood ratio test where the underlying model is a multinomial model.

Suppose we had a sample x = (x_1, \ldots, x_m) where each x_i is the number of times that an object of type i was observed. Furthermore, let n = \sum_{i=1}^m x_i be the total number of objects observed. If we assume that the underlying model is multinomial, then the test statistic is defined by

: \ln\left(\frac{L(\tilde{\theta})}{L(\hat{\theta})}\right) = \ln\left(\frac{\prod_{i=1}^m \tilde{\theta}_i^{x_i}}{\prod_{i=1}^m \hat{\theta}_i^{x_i}}\right) ,

where \tilde{\theta} is the null hypothesis and \hat{\theta} is the maximum likelihood estimate (MLE) of the parameters given the data. Recall that for the multinomial model, the MLE of \hat{\theta}_i given some data is defined by

: \hat{\theta}_i = \frac{x_i}{n} .

Furthermore, we may represent each null hypothesis parameter \tilde{\theta}_i as

: \tilde{\theta}_i = \frac{e_i}{n} ,

where e_i is the expected count in cell i under the null hypothesis. Thus, by substituting the representations of \tilde{\theta} and \hat{\theta} in the log-likelihood ratio, the equation simplifies to

: \ln\left(\frac{L(\tilde{\theta})}{L(\hat{\theta})}\right) = \ln \prod_{i=1}^m \left(\frac{e_i}{x_i}\right)^{x_i} = \sum_{i=1}^m x_i \ln\left(\frac{e_i}{x_i}\right) .

Relabel the variables e_i with E_i and x_i with O_i. Finally, multiply by a factor of -2 (used to make the ''G''-test formula asymptotically equivalent to the Pearson's chi-squared test formula) to achieve the form

: G = -2 \sum_{i=1}^m O_i \ln\left(\frac{E_i}{O_i}\right) = 2 \sum_{i=1}^m O_i \ln\left(\frac{O_i}{E_i}\right) .

Heuristically, one can imagine O_i as continuous and approaching zero, in which case O_i \ln O_i \to 0, and terms with zero observations can simply be dropped. However, the ''expected'' count in each cell must be strictly greater than zero for each cell ( E_i > 0 \;\; \forall\, i ) to apply the method.
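To make the derivation concrete, a short numerical check (with invented counts) confirms that minus twice the multinomial log-likelihood ratio equals the ''G'' statistic:

```python
import numpy as np

x = np.array([30, 14, 34, 45, 57, 20], dtype=float)  # observed counts x_i
n = x.sum()
e = np.full_like(x, n / x.size)   # expected counts e_i under a uniform null

theta_hat = x / n                 # MLE of the multinomial parameters
theta_null = e / n                # null-hypothesis parameters

# log-likelihood ratio: sum of x_i * ln(theta_null_i / theta_hat_i)
log_lr = np.sum(x * np.log(theta_null / theta_hat))
G = 2.0 * np.sum(x * np.log(x / e))

print(-2.0 * log_lr, G)           # the two values agree
```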


Distribution and use

Given the null hypothesis that the observed frequencies result from random sampling from a distribution with the given expected frequencies, the distribution of ''G'' is approximately a chi-squared distribution, with the same number of degrees of freedom as in the corresponding chi-squared test.

For very small samples the multinomial test for goodness of fit, and Fisher's exact test for contingency tables, or even Bayesian hypothesis selection are preferable to the ''G''-test. McDonald recommends always using an exact test (exact test of goodness-of-fit, Fisher's exact test) if the total sample size is less than 1,000.

: There is nothing magical about a sample size of 1,000, it's just a nice round number that is well within the range where an exact test, chi-square test, and ''G''-test will give almost identical ''p'' values. Spreadsheets, web-page calculators, and SAS shouldn't have any problem doing an exact test on a sample size of 1,000.
:::: — John H. McDonald (2014)

''G''-tests have been recommended at least since the 1981 edition of ''Biometry'', a statistics textbook by Robert R. Sokal and F. James Rohlf.
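Referring ''G'' to the chi-squared distribution to obtain a p-value might look as follows in Python; this is only a sketch with invented counts, using SciPy's chi-squared survival function:

```python
import numpy as np
from scipy.stats import chi2

O = np.array([30, 14, 34, 45, 57, 20], dtype=float)
E = np.full_like(O, O.sum() / O.size)   # uniform null hypothesis

G = 2.0 * np.sum(O * np.log(O / E))
df = O.size - 1                          # degrees of freedom for goodness of fit
p = chi2.sf(G, df)                       # upper-tail probability of the statistic
print(G, p)
```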


Relation to other metrics


Relation to the chi-squared test

The commonly used chi-squared tests for goodness of fit to a distribution and for independence in contingency tables are in fact approximations of the log-likelihood ratio on which the ''G''-tests are based. The general formula for Pearson's chi-squared test statistic is

: \chi^2 = \sum_{i} \frac{\left(O_i - E_i\right)^2}{E_i} .

The approximation of ''G'' by chi squared is obtained by a second order Taylor expansion of the natural logarithm around 1 (see #Derivation (chi-squared) below). We have G \approx \chi^2 when the observed counts O_i are close to the expected counts E_i. When this difference is large, however, the \chi^2 approximation begins to break down. Here, the effects of outliers in data will be more pronounced, and this explains why \chi^2 tests fail in situations with little data.

For samples of a reasonable size, the ''G''-test and the chi-squared test will lead to the same conclusions. However, the approximation to the theoretical chi-squared distribution for the ''G''-test is better than for the Pearson's chi-squared test. In cases where O_i > 2 \cdot E_i for some cell, the ''G''-test is always better than the chi-squared test. For testing goodness-of-fit the ''G''-test is infinitely more efficient than the chi-squared test in the sense of Bahadur, but the two tests are equally efficient in the sense of Pitman or in the sense of Hodges and Lehmann.
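A quick numerical comparison (with invented counts) illustrates the point: the two statistics nearly coincide when the observed counts are close to the expected ones, and drift apart otherwise:

```python
import numpy as np

def g_stat(O, E):
    return 2.0 * np.sum(O * np.log(O / E))

def chi2_stat(O, E):
    return np.sum((O - E) ** 2 / E)

E = np.full(4, 25.0)
close = np.array([26.0, 24.0, 27.0, 23.0])   # near the expected counts
far   = np.array([60.0,  5.0, 30.0,  5.0])   # far from the expected counts

print(g_stat(close, E), chi2_stat(close, E))  # nearly equal
print(g_stat(far, E),   chi2_stat(far, E))    # visibly different
```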


Derivation (chi-squared)

Consider

: G = 2\sum_{i} O_i \ln\left(\frac{O_i}{E_i}\right) ,

and let O_i = E_i + \delta_i with \sum_i \delta_i = 0, so that the total number of counts remains the same. Upon substitution we find,

: G = 2\sum_{i} (E_i + \delta_i) \ln\left(1 + \frac{\delta_i}{E_i}\right) .

A Taylor expansion around 1 + \frac{\delta_i}{E_i} can be performed using \ln(1 + x) = x - \frac{1}{2} x^2 + \mathcal{O}(x^3). The result is

: G = 2\sum_{i} (E_i + \delta_i) \left(\frac{\delta_i}{E_i} - \frac{1}{2}\frac{\delta_i^2}{E_i^2} + \mathcal{O}\left(\delta_i^3\right) \right) ,

and distributing terms we find,

: G = 2\sum_{i} \left( \delta_i + \frac{1}{2}\frac{\delta_i^2}{E_i} + \mathcal{O}\left(\delta_i^3\right) \right) .

Now, using the fact that \sum_{i} \delta_i = 0 and \delta_i = O_i - E_i, we can write the result,

: G \approx \sum_{i} \frac{\left(O_i - E_i\right)^2}{E_i} .
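The order of the approximation can be checked numerically: shrinking the perturbations \delta_i makes the gap between ''G'' and the chi-squared statistic vanish at third order. A sketch with invented counts:

```python
import numpy as np

E = np.array([40.0, 30.0, 20.0, 10.0])
delta = np.array([3.0, -1.0, -1.5, -0.5])   # perturbations summing to zero

for scale in (1.0, 0.5, 0.25, 0.125):
    d = scale * delta
    O = E + d
    G = 2.0 * np.sum(O * np.log(O / E))
    chi2 = np.sum(d ** 2 / E)
    print(scale, G - chi2)   # the gap shrinks roughly like scale**3
```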


Relation to Kullback–Leibler divergence

The ''G''-test statistic is proportional to the Kullback–Leibler divergence of the theoretical distribution from the empirical distribution:

: \begin{align} G &= 2\sum_{i} O_i \ln\left(\frac{O_i}{E_i}\right) = 2 N \sum_{i} o_i \ln\left(\frac{o_i}{e_i}\right) \\ &= 2 N \, D_{\mathrm{KL}}(o \,\|\, e) , \end{align}

where ''N'' is the total number of observations and o_i and e_i are the empirical and theoretical frequencies, respectively.
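This identity is straightforward to verify numerically; scipy.stats.entropy computes the Kullback–Leibler divergence when given two distributions (counts invented for the example):

```python
import numpy as np
from scipy.stats import entropy   # entropy(p, q) = KL divergence D(p || q)

O = np.array([30.0, 14.0, 34.0, 45.0, 57.0, 20.0])
E = np.full_like(O, O.sum() / O.size)
N = O.sum()

G = 2.0 * np.sum(O * np.log(O / E))
kl = entropy(O / N, E / N)        # D_KL(o || e), natural logarithm by default

print(G, 2.0 * N * kl)            # the two values agree
```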


Relation to mutual information

For analysis of contingency tables the value of ''G'' can also be expressed in terms of mutual information.

Let

: N = \sum_{ij} O_{ij} \; , \quad \pi_{ij} = \frac{O_{ij}}{N} \; , \quad \pi_{i.} = \frac{\sum_j O_{ij}}{N} \; , \quad \pi_{.j} = \frac{\sum_i O_{ij}}{N} \; .

Then ''G'' can be expressed in several alternative forms:

: G = 2 \cdot N \cdot \sum_{ij} \pi_{ij} \ln\left(\frac{\pi_{ij}}{\pi_{i.} \pi_{.j}}\right) ,

: G = 2 \cdot N \cdot \left[ H(r) + H(c) - H(r,c) \right] ,

: G = 2 \cdot N \cdot \operatorname{MI}(r,c) \, ,

where the entropy of a discrete random variable X is defined as

: H(X) = - \sum_{x \,\in\, \text{Supp}(X)} p(x) \ln p(x) \, ,

and where

: \operatorname{MI}(r,c) = H(r) + H(c) - H(r,c) \,

is the mutual information between the row vector ''r'' and the column vector ''c'' of the contingency table.

It can also be shown that the inverse document frequency weighting commonly used for text retrieval is an approximation of ''G'' applicable when the row sum for the query is much smaller than the row sum for the remainder of the corpus. Similarly, the result of Bayesian inference applied to a choice of single multinomial distribution for all rows of the contingency table taken together versus the more general alternative of a separate multinomial per row produces results very similar to the ''G'' statistic.
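The entropy identities above can be checked numerically on a small contingency table; the table below is invented for the example:

```python
import numpy as np

table = np.array([[10.0, 20.0, 30.0],
                  [25.0, 15.0, 20.0]])     # hypothetical 2x3 contingency table
N = table.sum()
pi = table / N                             # joint cell proportions pi_ij
pr, pc = pi.sum(axis=1), pi.sum(axis=0)    # row and column marginals

def H(p):
    p = p[p > 0]                           # zero cells contribute nothing
    return -np.sum(p * np.log(p))

# entropy form: G = 2 N [H(r) + H(c) - H(r,c)]
G_mi = 2.0 * N * (H(pr) + H(pc) - H(pi.ravel()))

# direct form: G = 2 N sum pi_ij ln(pi_ij / (pi_i. * pi_.j))
G_direct = 2.0 * N * np.sum(pi * np.log(pi / np.outer(pr, pc)))

print(G_mi, G_direct)                      # the two values agree
```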


Application

* The McDonald–Kreitman test in statistical genetics is an application of the ''G''-test.
* Dunning introduced the test to the computational linguistics community, where it is now widely used.
* The R-scape program (used by Rfam) uses the ''G''-test to detect co-variation between RNA sequence alignment positions.


Statistical software

* In R, fast implementations can be found in the AMR and Rfast packages. For the AMR package, the command is g.test, which works exactly like chisq.test from base R. R also has the likelihood.test function in the Deducer package. Note: Fisher's ''G''-test in the GeneCycle package of the R programming language (fisher.g.test) does not implement the ''G''-test as described in this article, but rather Fisher's exact test of Gaussian white-noise in a time series.
* Another R implementation to compute the G statistic and corresponding p-values is provided by the R package entropy. The commands are Gstat for the standard G statistic and the associated p-value, and Gstatindep for the G statistic applied to comparing joint and product distributions to test independence.
* In SAS, one can conduct a ''G''-test by applying the /chisq option after the proc freq command.
* In Stata, one can conduct a ''G''-test by applying the lr option after the tabulate command.
* In Java, use org.apache.commons.math3.stat.inference.GTest.
* In Python, use scipy.stats.power_divergence with lambda_=0 (see the sketch after this list).
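A minimal sketch of the SciPy usage mentioned above, covering both a goodness-of-fit test and a contingency-table independence test (example data invented):

```python
import numpy as np
from scipy.stats import power_divergence, chi2_contingency

# Goodness of fit: lambda_=0 selects the log-likelihood ratio (G) statistic;
# expected frequencies default to uniform when f_exp is omitted.
obs = [30, 14, 34, 45, 57, 20]
stat, p = power_divergence(obs, lambda_=0)
print(stat, p)

# Independence in a contingency table, again using the G statistic
table = np.array([[10, 20, 30], [25, 15, 20]])
g, p, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
print(g, p, dof)
```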


References


External links


* G2/Log-likelihood calculator