Kruskal–Wallis one-way analysis of variance

The Kruskal–Wallis test by ranks, Kruskal–Wallis ''H'' test (named after William Kruskal and W. Allen Wallis), or one-way ANOVA on ranks is a non-parametric method for testing whether samples originate from the same distribution. It is used for comparing two or more independent samples of equal or different sample sizes. It extends the Mann–Whitney ''U'' test, which is used for comparing only two groups. The parametric equivalent of the Kruskal–Wallis test is the one-way analysis of variance (ANOVA).

A significant Kruskal–Wallis test indicates that at least one sample stochastically dominates one other sample. The test does not identify where this stochastic dominance occurs, or for how many pairs of groups stochastic dominance obtains. To analyze specific sample pairs for stochastic dominance, Dunn's test, pairwise Mann–Whitney tests with Bonferroni correction, or the more powerful but less well known Conover–Iman test are sometimes used.

Since it is a non-parametric method, the Kruskal–Wallis test does not assume a normal distribution of the residuals, unlike the analogous one-way analysis of variance. If the researcher can assume an identically shaped and scaled distribution for all groups, except for any difference in medians, then the null hypothesis is that the medians of all groups are equal, and the alternative hypothesis is that at least one population median differs from the population median of at least one other group. Otherwise, it is impossible to say whether a rejection of the null hypothesis comes from a shift in location or from a difference in group dispersions. The same issue arises with the Mann–Whitney test.
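In practice the test is run with statistical software. As a minimal sketch, assuming Python with SciPy installed and using invented data for three hypothetical groups (the names a, b, c and the values are illustrative only):

    from scipy import stats

    # Hypothetical data: three independent samples of unequal size.
    a = [6.1, 5.9, 6.3, 6.0]
    b = [6.8, 7.1, 6.9, 7.3, 7.0]
    c = [5.5, 5.8, 5.6]

    # scipy.stats.kruskal returns the H statistic (tie-corrected) and a
    # p-value based on the chi-squared approximation described below.
    statistic, p_value = stats.kruskal(a, b, c)
    print(statistic, p_value)
    if p_value < 0.05:
        # Significance says only that at least one group stochastically
        # dominates another; it does not say which pairs differ.
        print("Reject the null hypothesis of identical distributions.")

A ''post hoc'' procedure such as Dunn's test would still be needed to identify which specific pairs differ.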


Method

# Rank all data from all groups together; i.e., rank the data from 1 to ''N'' ignoring group membership. Assign any tied values the average of the ranks they would have received had they not been tied.
# The test statistic is given by:
#: H = (N-1)\frac{\sum_{i=1}^g n_i (\bar{r}_{i\cdot} - \bar{r})^2}{\sum_{i=1}^g \sum_{j=1}^{n_i} (r_{ij} - \bar{r})^2}, where:
#* N is the total number of observations across all groups,
#* g is the number of groups,
#* n_i is the number of observations in group i,
#* r_{ij} is the rank (among all observations) of observation j from group i,
#* \bar{r}_{i\cdot} = \frac{\sum_{j=1}^{n_i} r_{ij}}{n_i} is the average rank of all observations in group i,
#* \bar{r} = \tfrac{1}{2}(N+1) is the average of all the r_{ij}.
# If the data contain no ties, the denominator of the expression for H is exactly (N-1)N(N+1)/12 and \bar{r} = \tfrac{N+1}{2}. Thus
#: \begin{align} H & = \frac{12}{N(N+1)} \sum_{i=1}^g n_i \left(\bar{r}_{i\cdot} - \frac{N+1}{2}\right)^2 \\ & = \frac{12}{N(N+1)} \sum_{i=1}^g n_i \bar{r}_{i\cdot}^2 - 3(N+1). \end{align}
#: The last formula contains only the squares of the average ranks (a worked code sketch follows this list).
# If using the short-cut formula of the previous point, a correction for ties can be made by dividing H by 1 - \frac{\sum_{i=1}^G (t_i^3 - t_i)}{N^3 - N}, where ''G'' is the number of groupings of different tied ranks, and t_i is the number of tied values within grouping i that are tied at a particular value. This correction usually makes little difference in the value of ''H'' unless there are a large number of ties.
# Finally, the decision to reject or not the null hypothesis is made by comparing H to a critical value H_c obtained from a table or from software for a given significance or alpha level. If H is bigger than H_c, the null hypothesis is rejected. If possible (no ties, sample not too big) one should compare H to the critical value obtained from the exact distribution of H. Otherwise, the distribution of H can be approximated by a chi-squared distribution with g-1 degrees of freedom. If some n_i values are small (i.e., less than 5), the exact probability distribution of H can be quite different from this chi-squared distribution. If a table of the chi-squared probability distribution is available, the critical value of chi-squared, \chi^2_{\alpha,\, g-1}, can be found by entering the table at ''g'' − 1 degrees of freedom and looking under the desired significance or alpha level.
# If the statistic is not significant, then there is no evidence of stochastic dominance between the samples. However, if the test is significant, then at least one sample stochastically dominates another sample. The researcher might then use sample contrasts between individual sample pairs, or ''post hoc'' tests using Dunn's test, which (1) properly employs the same rankings as the Kruskal–Wallis test and (2) properly employs the pooled variance implied by the null hypothesis of the Kruskal–Wallis test, in order to determine which of the sample pairs are significantly different. When performing multiple sample contrasts or tests, the Type I error rate tends to become inflated, raising concerns about multiple comparisons.
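The steps above translate directly into code. The following sketch, assuming Python with NumPy and SciPy and using invented example data, computes H from the pooled ranks with the short-cut formula, applies the tie correction, and approximates the p-value with the chi-squared distribution; SciPy's built-in routine serves as a cross-check:

    import numpy as np
    from scipy import stats

    # Hypothetical data: three independent groups.
    groups = [np.array([2.9, 3.0, 2.5, 2.6, 3.2]),
              np.array([3.8, 2.7, 4.0, 2.4]),
              np.array([2.8, 3.4, 3.7, 2.2, 2.0])]
    pooled = np.concatenate(groups)
    N = pooled.size

    # Step 1: rank all observations together; ties get the average rank.
    ranks = stats.rankdata(pooled)
    rank_groups = np.split(ranks, np.cumsum([g.size for g in groups])[:-1])

    # Step 3 (short-cut formula): H = 12/(N(N+1)) * sum n_i rbar_i^2 - 3(N+1).
    H = 12.0 / (N * (N + 1)) * sum(
        r.size * r.mean() ** 2 for r in rank_groups) - 3 * (N + 1)

    # Step 4: tie correction, dividing H by 1 - sum(t^3 - t) / (N^3 - N);
    # untied values contribute t = 1, i.e. nothing, so summing over all
    # distinct values is equivalent to summing over the tie groupings.
    _, t = np.unique(pooled, return_counts=True)
    H /= 1.0 - (t ** 3 - t).sum() / (N ** 3 - N)

    # Step 5: chi-squared approximation with g - 1 degrees of freedom.
    p_value = stats.chi2.sf(H, df=len(groups) - 1)

    # Cross-check against SciPy's implementation of the same test.
    H_ref, p_ref = stats.kruskal(*groups)
    assert np.isclose(H, H_ref) and np.isclose(p_value, p_ref)

For the small group sizes used here (all n_i are 5 or fewer), the chi-squared approximation can be rough, and comparison against the exact distribution of H would be preferable, as noted above.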


Exact probability tables

Computing exact probabilities for the Kruskal–Wallis test requires substantial computing resources. Existing software provides exact probabilities only for sample sizes of fewer than about 30 participants, and relies on the asymptotic approximation for larger samples. Exact probability values for larger sample sizes have nonetheless been published: Spurrier (2003) gave exact probability tables for samples as large as 45 participants, and Meyer and Seaman (2006) produced exact probability distributions for samples as large as 105 participants.


Exact distribution of H

Choi et al. reviewed two methods that had been developed to compute the exact distribution of H, proposed a new one, and compared the exact distribution to its chi-squared approximation.


See also

* Friedman test
* Jonckheere's trend test


References



{{DEFAULTSORT:Kruskal-Wallis one-way analysis of variance}}
[[Category:Statistical tests]]
[[Category:Analysis of variance]]
[[Category:Nonparametric statistics]]