Extensions of Fisher's method

In statistics, extensions of Fisher's method are a group of approaches that allow approximately valid statistical inferences to be made when the assumptions required for the direct application of Fisher's method are not valid. Fisher's method is a way of combining the information in the ''p''-values from different statistical tests so as to form a single overall test: this method requires that the individual test statistics (or, more immediately, their resulting ''p''-values) should be statistically independent.
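For concreteness, here is a minimal sketch of Fisher's method itself for independent ''p''-values (the statistic is defined formally in the section on Brown's method below; the function name is illustrative):

import numpy as np
from scipy.stats import chi2

def fishers_method(p_values):
    # X = -2 * sum(ln p_i) follows a chi-squared distribution with
    # 2k degrees of freedom when the k p-values are independent.
    p = np.asarray(p_values, dtype=float)
    x = -2.0 * np.sum(np.log(p))
    return chi2.sf(x, df=2 * p.size)

print(fishers_method([0.01, 0.04, 0.10]))  # combined p-value, about 0.0025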


Dependent statistics

A principal limitation of Fisher's method is that it is designed to combine independent ''p''-values, which makes it unreliable when the ''p''-values are dependent. To overcome this limitation, a number of methods have been developed to extend its utility.


Known covariance


Brown's method

Fisher's method showed that the log-sum of ''k'' independent ''p''-values follows a ''χ''2-distribution with 2''k'' degrees of freedom:

: X = -2\sum_{i=1}^k \log_e(p_i) \sim \chi^2(2k) .

In the case that these ''p''-values are not independent, Brown proposed approximating ''X'' using a scaled ''χ''2-distribution, ''cχ''2(''k′''), with ''k′'' degrees of freedom. The mean and variance of this scaled ''χ''2 variable are:

: \operatorname{E}[c\chi^2(k')] = ck' ,
: \operatorname{Var}[c\chi^2(k')] = 2c^2k' ,

where c = \operatorname{Var}(X)/(2\operatorname{E}[X]) and k' = 2(\operatorname{E}[X])^2/\operatorname{Var}(X). The approximation matches the first two moments of ''X''.
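A minimal sketch of Brown's method, assuming the covariance matrix of the transformed terms -2 ln(''p_i'') is already known (Brown's paper gives approximations for deriving these covariances from the correlations of the underlying test statistics; the function name is illustrative):

import numpy as np
from scipy.stats import chi2

def browns_method(p_values, cov):
    # cov: k x k covariance matrix of the terms -2*ln(p_i); under the
    # null each diagonal entry is 4 (the variance of a chi2(2) variable).
    p = np.asarray(p_values, dtype=float)
    k = p.size
    x = -2.0 * np.sum(np.log(p))         # Fisher's statistic X
    mean_x = 2.0 * k                     # E[X] under the null
    var_x = float(np.sum(cov))           # Var(X): sum of all entries of cov
    c = var_x / (2.0 * mean_x)           # scale factor c
    k_prime = 2.0 * mean_x**2 / var_x    # effective degrees of freedom k'
    return chi2.sf(x / c, df=k_prime)    # P(c * chi2(k') > X)

# Example: three tests whose -2*ln(p_i) terms are positively correlated.
cov = np.array([[4.0, 1.0, 0.5],
                [1.0, 4.0, 1.0],
                [0.5, 1.0, 4.0]])
print(browns_method([0.01, 0.04, 0.10], cov))

When the off-diagonal covariances are all zero this reduces to c = 1 and k′ = 2k, recovering Fisher's original test.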


Unknown covariance


Harmonic mean ''p''-value

The harmonic mean ''p''-value offers an alternative to Fisher's method for combining ''p''-values when the dependency structure is unknown but the tests cannot be assumed to be independent.
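A minimal sketch of the equally weighted harmonic mean ''p''-value (the function name is illustrative; the value is approximately valid directly when it is small, and exact calibration, which uses a Landau distribution, is omitted here):

import numpy as np

def harmonic_mean_p(p_values):
    # Equally weighted harmonic mean of the p-values: k / sum(1/p_i).
    p = np.asarray(p_values, dtype=float)
    return p.size / np.sum(1.0 / p)

print(harmonic_mean_p([0.01, 0.04, 0.10]))  # about 0.022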


Kost's method: ''t'' approximation

This method requires the test statistics' covariance structure to be known up to a scalar multiplicative constant.


Cauchy combination test

This is conceptually similar to Fisher's method: it computes a sum of transformed ''p''-values. Unlike Fisher's method, which uses a log transformation to obtain a test statistic that has a chi-squared distribution under the null, the Cauchy combination test uses a tangent transformation to obtain a test statistic whose tail is asymptotic to that of a Cauchy distribution under the null. The test statistic is:

: X = \sum_{i=1}^k \omega_i \tan[(0.5 - p_i)\pi] ,

where \omega_i are non-negative weights, subject to \sum_{i=1}^k \omega_i = 1. Under the null, the p_i are uniformly distributed, therefore the \tan[(0.5 - p_i)\pi] are standard Cauchy distributed. Under some mild assumptions, but allowing for arbitrary dependency between the p_i, the tail of the distribution of ''X'' is asymptotic to that of a standard Cauchy distribution. More precisely, letting ''W'' denote a standard Cauchy random variable:

: \lim_{t \to \infty} \frac{P(X > t)}{P(W > t)} = 1 .

This leads to a combined hypothesis test, in which ''X'' is compared to the quantiles of the standard Cauchy distribution.
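A minimal sketch of the Cauchy combination test as described above, defaulting to equal weights (the function name is illustrative):

import numpy as np
from scipy.stats import cauchy

def cauchy_combination(p_values, weights=None):
    p = np.asarray(p_values, dtype=float)
    w = (np.full(p.size, 1.0 / p.size) if weights is None
         else np.asarray(weights, dtype=float))   # weights must sum to 1
    x = np.sum(w * np.tan((0.5 - p) * np.pi))     # combined statistic X
    return cauchy.sf(x)                           # standard Cauchy tail P(W > x)

print(cauchy_combination([0.01, 0.04, 0.10]))  # about 0.022

Because the tail approximation holds under arbitrary dependency, no covariance estimate is needed, in contrast to Brown's and Kost's methods.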

