Mann–Whitney U test

In statistics, the Mann–Whitney ''U'' test (also called the Mann–Whitney–Wilcoxon (MWW/MWU), Wilcoxon rank-sum test, or Wilcoxon–Mann–Whitney test) is a nonparametric test of the null hypothesis that, for randomly selected values ''X'' and ''Y'' from two populations, the probability of ''X'' being greater than ''Y'' is equal to the probability of ''Y'' being greater than ''X''. Nonparametric tests used on two ''dependent'' samples are the sign test and the Wilcoxon signed-rank test.


Assumptions and formal statement of hypotheses

Although Mann and Whitney developed the Mann–Whitney ''U'' test under the assumption of continuous responses, with the alternative hypothesis being that one distribution is stochastically greater than the other, there are many other ways to formulate the null and alternative hypotheses such that the Mann–Whitney ''U'' test will give a valid test.

A very general formulation is to assume that:
# All the observations from both groups are independent of each other,
# The responses are at least ordinal (i.e., one can at least say, of any two observations, which is the greater),
# Under the null hypothesis ''H''0, the distributions of both populations are identical,
# The alternative hypothesis ''H''1 is that the distributions are not identical.

Under the general formulation, the test is only consistent when the following occurs under ''H''1:
# The probability of an observation from population ''X'' exceeding an observation from population ''Y'' is different (larger, or smaller) than the probability of an observation from ''Y'' exceeding an observation from ''X''; i.e., P(X > Y) \neq P(Y > X), or equivalently P(X > Y) + 0.5\,P(X = Y) \neq 0.5.

Under more strict assumptions than the general formulation above, e.g., if the responses are assumed to be continuous and the alternative is restricted to a shift in location, i.e., F_1(x) = F_2(x + \delta), we can interpret a significant Mann–Whitney ''U'' test as showing a difference in medians. Under this location shift assumption, we can also interpret the Mann–Whitney ''U'' test as assessing whether the Hodges–Lehmann estimate of the difference in central tendency between the two populations differs from zero. The Hodges–Lehmann estimate for this two-sample problem is the median of all possible differences between an observation in the first sample and an observation in the second sample.

Otherwise, if both the dispersions and shapes of the distributions of the two samples differ, the Mann–Whitney ''U'' test cannot be interpreted as a test of medians: it is possible to construct examples where the medians are numerically equal while the test rejects the null hypothesis with a small p-value.

The Mann–Whitney ''U'' test / Wilcoxon rank-sum test is not the same as the Wilcoxon ''signed''-rank test, although both are nonparametric and involve summation of ranks. The Mann–Whitney ''U'' test is applied to independent samples; the Wilcoxon signed-rank test is applied to matched or dependent samples.


U statistic

Let X_1,\ldots, X_n be an i.i.d. sample from X, and Y_1,\ldots, Y_m an i.i.d. sample from Y, with both samples independent of each other. The corresponding Mann–Whitney ''U'' statistic is defined as:
:U = \sum_{i=1}^n \sum_{j=1}^m S(X_i, Y_j),
with
:S(X,Y) = \begin{cases} 1, & \text{if } X > Y, \\ \tfrac{1}{2}, & \text{if } X = Y, \\ 0, & \text{if } X < Y. \end{cases}
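To make the pairwise definition concrete, here is a minimal Python sketch; the function name mann_whitney_u and the toy data are illustrative, not taken from any particular library.

<syntaxhighlight lang="python">
import numpy as np

def mann_whitney_u(x, y):
    """U = sum over all pairs (i, j) of S(x_i, y_j)."""
    x = np.asarray(x, dtype=float)[:, None]   # shape (n, 1)
    y = np.asarray(y, dtype=float)[None, :]   # shape (1, m)
    s = (x > y) + 0.5 * (x == y)              # 1 if x_i > y_j, 1/2 if tied, 0 otherwise
    return s.sum()

# Each x value contributes its number of "wins" over y, with ties counting 1/2:
print(mann_whitney_u([1.2, 3.4, 5.0], [0.9, 3.4]))   # 1 + 1.5 + 2 = 4.5
</syntaxhighlight>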


Area-under-curve (AUC) statistic for ROC curves

The ''U'' statistic is equivalent to the area under the receiver operating characteristic (ROC) curve (AUC), which can be readily calculated:
:\mathrm{AUC}_1 = \frac{U_1}{n_1 n_2}
Note that this is the same definition as the common language effect size described below, i.e., the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming "positive" ranks higher than "negative") (Fawcett, Tom (2006); "An introduction to ROC analysis", Pattern Recognition Letters, 27, 861–874).

Because of its probabilistic form, the ''U'' statistic can be generalised to a measure of a classifier's separation power for more than two classes:
:M = \frac{1}{c(c-1)} \sum_{k \neq \ell} \mathrm{AUC}_{k,\ell}
where ''c'' is the number of classes, and each \mathrm{AUC}_{k,\ell} term considers only the ranking of the items belonging to classes ''k'' and ''ℓ'' (i.e., items belonging to all other classes are ignored) according to the classifier's estimates of the probability of those items belonging to class ''k''. \mathrm{AUC}_{k,k} will always be zero but, unlike in the two-class case, generally \mathrm{AUC}_{k,\ell} \neq \mathrm{AUC}_{\ell,k}, which is why the ''M'' measure sums over all (''k'',''ℓ'') pairs, in effect using the average of \mathrm{AUC}_{k,\ell} and \mathrm{AUC}_{\ell,k}.
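As a quick numerical check of the two-class AUC relationship, the following sketch (with assumed toy classifier scores) compares U_1/(n_1 n_2), obtained from SciPy's mannwhitneyu, with the pairwise AUC definition; it assumes a recent SciPy (≥ 1.7), where the returned statistic corresponds to the first sample.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import mannwhitneyu

pos = np.array([0.9, 0.8, 0.8, 0.6])   # assumed scores for positive items
neg = np.array([0.7, 0.5, 0.8])        # assumed scores for negative items

u1 = mannwhitneyu(pos, neg, alternative="two-sided").statistic
auc_from_u = u1 / (len(pos) * len(neg))

# Pairwise definition: P(score_pos > score_neg) + 0.5 * P(tie)
wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
auc_pairwise = wins / (len(pos) * len(neg))

print(auc_from_u, auc_pairwise)   # both 0.75 for these scores
</syntaxhighlight>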


Calculations

The test involves the calculation of a statistic, usually called ''U'', whose distribution under the null hypothesis is known. In the case of small samples, the distribution is tabulated, but for sample sizes above ~20, approximation using the normal distribution is fairly good. Some books tabulate statistics equivalent to ''U'', such as the sum of ranks in one of the samples, rather than ''U'' itself.

The Mann–Whitney ''U'' test is included in most modern statistical packages. It is also easily calculated by hand, especially for small samples. There are two ways of doing this.

Method one: For comparing two small sets of observations, a direct method is quick, and gives insight into the meaning of the ''U'' statistic, which corresponds to the number of wins out of all pairwise contests (see the tortoise and hare example under Examples below). For each observation in one set, count the number of times this first value wins over any observations in the other set (the other value loses if this first is larger). Count 0.5 for any ties. The sum of wins and ties is ''U'' (i.e., U_1) for the first set. ''U'' for the other set is the converse (i.e., U_2).

Method two: For larger samples:
# Assign numeric ranks to all the observations (put the observations from both groups into one set), beginning with 1 for the smallest value. Where there are groups of tied values, assign a rank equal to the midpoint of the unadjusted rankings (e.g., the ranks of (3, 5, 5, 5, 5, 8) are (1, 3.5, 3.5, 3.5, 3.5, 6), where the unadjusted ranks would be (1, 2, 3, 4, 5, 6)).
# Now, add up the ranks for the observations which came from sample 1. The sum of ranks in sample 2 is then determined, since the sum of all the ranks equals N(N+1)/2, where ''N'' is the total number of observations.
# ''U'' is then given by:
:::U_1 = R_1 - \frac{n_1(n_1+1)}{2},
::where ''n''1 is the sample size for sample 1, and ''R''1 is the sum of the ranks in sample 1.
::Note that it does not matter which of the two samples is considered sample 1. An equally valid formula for ''U'' is
:::U_2 = R_2 - \frac{n_2(n_2+1)}{2}.
::The smaller of ''U''1 and ''U''2 is the one used when consulting significance tables. The sum of the two values is given by
:::U_1 + U_2 = R_1 - \frac{n_1(n_1+1)}{2} + R_2 - \frac{n_2(n_2+1)}{2}.
::Knowing that R_1 + R_2 = \frac{N(N+1)}{2} and N = n_1 + n_2, and doing some algebra, we find that the sum is
:::U_1 + U_2 = n_1 n_2.


Properties

The maximum value of ''U'' is the product of the sample sizes for the two samples (i.e., \max U_i = n_1 n_2). When one ''U'' attains this maximum, the "other" ''U'' is 0, since U_1 + U_2 = n_1 n_2.


Examples


Illustration of calculation methods

Suppose that Aesop is dissatisfied with his classic experiment in which one tortoise was found to beat one hare in a race, and decides to carry out a significance test to discover whether the results could be extended to tortoises and hares in general. He collects a sample of 6 tortoises and 6 hares, and makes them all run his race at once. The order in which they reach the finishing post (their rank order, from first to last crossing the finish line) is as follows, writing T for a tortoise and H for a hare:

:T H H H H H T T T T T H

What is the value of ''U''?

* Using the direct method, we take each tortoise in turn, and count the number of hares it beats, getting 6, 1, 1, 1, 1, 1, which means that U_T = 11. Alternatively, we could take each hare in turn, and count the number of tortoises it beats. In this case, we get 5, 5, 5, 5, 5, 0, so U_H = 25. Note that the sum of these two values for ''U'' is 36, which is 6 × 6, the product of the sample sizes.
* Using the indirect method:
: rank the animals by the time they take to complete the course, so give the first animal home rank 12, the second rank 11, and so forth.
: the sum of the ranks achieved by the tortoises is 12 + 6 + 5 + 4 + 3 + 2 = 32.
:: Therefore U_T = 32 − (6×7)/2 = 32 − 21 = 11 (same as method one).
:: The sum of the ranks achieved by the hares is 11 + 10 + 9 + 8 + 7 + 1 = 46, leading to U_H = 46 − 21 = 25.
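The direct method for this race can be checked with a few lines of Python (the string encoding of the finishing order is just a convenient representation).

<syntaxhighlight lang="python">
# Finishing order from the example, first to last across the line.
order = "THHHHHTTTTTH"

# Direct method: for every tortoise/hare pair, the earlier finisher scores the win.
u_tortoise = sum(1 for i, a in enumerate(order) for b in order[i + 1:]
                 if a == "T" and b == "H")
u_hare = sum(1 for i, a in enumerate(order) for b in order[i + 1:]
             if a == "H" and b == "T")

print(u_tortoise, u_hare, u_tortoise + u_hare)   # 11 25 36 (= 6 * 6)
</syntaxhighlight>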


Example statement of results

In reporting the results of a Mann–Whitney ''U'' test, it is important to state:
* A measure of the central tendencies of the two groups (means or medians; since the Mann–Whitney ''U'' test is an ordinal test, medians are usually recommended)
* The value of ''U'' (perhaps with some measure of effect size, such as common language effect size or rank-biserial correlation)
* The sample sizes
* The significance level.

In practice some of this information may already have been supplied and common sense should be used in deciding whether to repeat it. A typical report might run,
:"Median latencies in groups E and C were 153 and 247 ms; the distributions in the two groups differed significantly (Mann–Whitney ''U'' = …, ''n''1 = ''n''2 = …, ''P'' = …, two-tailed)."

A statement that does full justice to the statistical status of the test might run,
:"Outcomes of the two treatments were compared using the Wilcoxon–Mann–Whitney two-sample rank-sum test. The treatment effect (difference between treatments) was quantified using the Hodges–Lehmann (HL) estimator, which is consistent with the Wilcoxon test. This estimator (HLΔ) is the median of all possible differences in outcomes between a subject in group B and a subject in group A. A non-parametric 0.95 confidence interval for HLΔ accompanies these estimates, as does ρ, an estimate of the probability that a randomly chosen subject from population B has a higher weight than a randomly chosen subject from population A. The median [quartiles] weights for subjects on treatments A and B respectively are 147 [121, 177] and 151 [130, 180] kg. Treatment A decreased weight by HLΔ = 5 kg (0.95 CL […, 9] kg, …)."

However it would be rare to find such an extensive report in a document whose major topic was not statistical inference.


Normal approximation and tie correction

For large samples, ''U'' is approximately normally distributed. In that case, the standardized value
:z = \frac{U - m_U}{\sigma_U},
where ''m''''U'' and ''σ''''U'' are the mean and standard deviation of ''U'', is approximately a standard normal deviate whose significance can be checked in tables of the normal distribution. ''m''''U'' and ''σ''''U'' are given by
:m_U = \frac{n_1 n_2}{2}, and
:\sigma_U = \sqrt{\frac{n_1 n_2 (n_1 + n_2 + 1)}{12}}.

The formula for the standard deviation is more complicated in the presence of tied ranks. If there are ties in ranks, ''σ'' should be adjusted as follows:
:\sigma_\text{corr} = \sqrt{\frac{n_1 n_2 (n+1)}{12} - \frac{n_1 n_2 \sum_{k=1}^K (t_k^3 - t_k)}{12\, n (n-1)}},
where the left term under the square root is simply the variance and the right term is the adjustment for ties, ''t''''k'' is the number of ties for the ''k''th rank, and ''K'' is the total number of unique ranks with ties. A more computationally efficient form, with n_1 n_2 / 12 factored out, is
:\sigma_\text{corr} = \sqrt{\frac{n_1 n_2}{12}\left((n + 1) - \frac{\sum_{k=1}^K (t_k^3 - t_k)}{n(n-1)}\right)},
where n = n_1 + n_2.

If the number of ties is small (and especially if there are no large tie bands), ties can be ignored when doing calculations by hand. Computer statistical packages will use the correctly adjusted formula as a matter of routine.

Note that since U_1 + U_2 = n_1 n_2, the mean n_1 n_2 / 2 used in the normal approximation is the mean of the two values of ''U''. Therefore, the absolute value of the ''z''-statistic calculated will be the same whichever value of ''U'' is used.
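A Python sketch of the normal approximation with the tie-corrected standard deviation; u_z_approx and the sample data are illustrative names and values, not a library API.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import rankdata, norm

def u_z_approx(sample1, sample2):
    n1, n2 = len(sample1), len(sample2)
    n = n1 + n2
    pooled = np.concatenate([sample1, sample2])
    u1 = rankdata(pooled)[:n1].sum() - n1 * (n1 + 1) / 2   # U for sample 1 via rank sums

    m_u = n1 * n2 / 2                                      # mean of U under H0
    _, t = np.unique(pooled, return_counts=True)           # tie sizes t_k (1 for untied values)
    tie_term = (t**3 - t).sum() / (n * (n - 1))
    sigma = np.sqrt(n1 * n2 / 12 * ((n + 1) - tie_term))   # tie-corrected standard deviation

    z = (u1 - m_u) / sigma
    return z, 2 * norm.sf(abs(z))                          # z and two-sided p-value

print(u_z_approx([1, 2, 2, 4, 5], [3, 3, 4, 6, 7, 8]))
</syntaxhighlight>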


Effect sizes

It is a widely recommended practice for scientists to report an effect size for an inferential test.


Proportion of concordance out of all pairs

The following three measures are equivalent.


Common language effect size

One method of reporting the effect size for the Mann–Whitney ''U'' test is with ''f'', the common language effect size. As a sample statistic, the common language effect size is computed by forming all possible pairs between the two groups, then finding the proportion of pairs that support a direction (say, that items from group 1 are larger than items from group 2). To illustrate, in a study with a sample of ten hares and ten tortoises, the total number of ordered pairs is ten times ten or 100 pairs of hares and tortoises. Suppose the results show that the hare ran faster than the tortoise in 90 of the 100 sample pairs; in that case, the sample common language effect size is 90%. This sample value is an unbiased estimator of the population value, so the sample suggests that the best estimate of the common language effect size in the population is 90%.

The relationship between ''f'' and the Mann–Whitney ''U'' (specifically U_1) is as follows:
:f = \frac{U_1}{n_1 n_2}
This is the same as the area under the curve (AUC) for the ROC curve.
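A minimal sketch of computing ''f'' directly from the pairwise definition (the group data are illustrative):

<syntaxhighlight lang="python">
import numpy as np

group1 = np.array([12.0, 15.0, 9.0, 18.0])   # illustrative data
group2 = np.array([8.0, 11.0, 14.0])

# Proportion of all (group1, group2) pairs in which the group1 value is larger,
# counting ties as half; this equals U1 / (n1 * n2).
wins = (group1[:, None] > group2[None, :]).sum() + 0.5 * (group1[:, None] == group2[None, :]).sum()
f = wins / (group1.size * group2.size)
print(f)   # 0.75 for these data
</syntaxhighlight>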


''ρ'' statistic

A statistic called ''ρ'' that is linearly related to ''U'' and widely used in studies of categorization (discrimination learning involving concepts), and elsewhere, is calculated by dividing ''U'' by its maximum value for the given sample sizes, which is simply n_1 n_2. ''ρ'' is thus a non-parametric measure of the overlap between two distributions; it can take values between 0 and 1, and it is an estimate of P(X > Y) + 0.5\,P(X = Y), where ''X'' and ''Y'' are randomly chosen observations from the two distributions. Both extreme values represent complete separation of the distributions, while a ''ρ'' of 0.5 represents complete overlap.

The usefulness of the ''ρ'' statistic can be seen in the case of an example where two distributions that are significantly different on a Mann–Whitney ''U'' test nonetheless have nearly identical medians: a ''ρ'' value of approximately 0.723 in favour of the hares correctly reflects the fact that, even though the median tortoise beat the median hare, the hares collectively did better than the tortoises collectively.


Rank-biserial correlation

A method of reporting the effect size for the Mann–Whitney ''U'' test is with a measure of rank correlation known as the rank-biserial correlation. Edward Cureton introduced and named the measure. Like other correlational measures, the rank-biserial correlation can range from minus one to plus one, with a value of zero indicating no relationship.

There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs favorable to the hypothesis (''f'') and its complement (i.e., the proportion that is unfavorable (''u'')). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:
:r = f - u
For example, consider the example where hares run faster than tortoises in 90 of 100 pairs. The common language effect size is 90%, so the rank-biserial correlation is 90% minus 10%, and the rank-biserial r = 0.80.

An alternative formula for the rank-biserial can be used to calculate it from the Mann–Whitney ''U'' (either U_1 or U_2) and the sample sizes of each group:
:r = f - (1 - f) = 2f - 1 = \frac{2 U_1}{n_1 n_2} - 1 = 1 - \frac{2 U_2}{n_1 n_2}
This formula is useful when the data are not available but a published report is, because ''U'' and the sample sizes are routinely reported. Using the example above with 90 pairs that favor the hares and 10 pairs that favor the tortoises, ''U''2 is the smaller of the two, so U_2 = 10. This formula then gives r = 1 − 2(10)/(10 × 10) = 0.80, which is the same result as with the simple difference formula above.
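The two formulas can be checked on the 90-out-of-100-pairs example with a few lines of Python (a sketch; the variable names are illustrative).

<syntaxhighlight lang="python">
# 90 of the 100 hare/tortoise pairs favour the hypothesis (hares faster).
n1, n2 = 10, 10
favourable = 90

f = favourable / (n1 * n2)         # common language effect size, 0.9
r_simple = f - (1 - f)             # simple difference formula: 0.9 - 0.1

u2 = n1 * n2 - favourable          # the smaller U here, U2 = 10
r_from_u = 1 - 2 * u2 / (n1 * n2)  # alternative formula from U2 and the sample sizes

print(r_simple, r_from_u)          # both 0.8 (up to floating-point rounding)
</syntaxhighlight>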


Relation to other tests


Comparison to Student's ''t''-test

The Mann–Whitney ''U'' test tests a null hypothesis that the probability distribution of a randomly drawn observation from one group is the same as the probability distribution of a randomly drawn observation from the other group, against an alternative that those distributions are not equal (see Assumptions and formal statement of hypotheses above). In contrast, a ''t''-test tests a null hypothesis of equal means in two groups against an alternative of unequal means. Hence, except in special cases, the Mann–Whitney ''U'' test and the ''t''-test do not test the same hypotheses and should be compared with this in mind.

;Ordinal data: The Mann–Whitney ''U'' test is preferable to the ''t''-test when the data are ordinal but not interval scaled, in which case the spacing between adjacent values of the scale cannot be assumed to be constant.

;Robustness: As it compares the sums of ranks (Motulsky, Harvey J.; ''Statistics Guide'', San Diego, CA: GraphPad Software, 2007, p. 123), the Mann–Whitney ''U'' test is less likely than the ''t''-test to spuriously indicate significance because of the presence of outliers. However, the Mann–Whitney ''U'' test may have worse type I error control when data are both heteroscedastic and non-normal.

;Efficiency: When normality holds, the Mann–Whitney ''U'' test has an (asymptotic) efficiency of 3/π, or about 0.95, when compared to the ''t''-test (Lehmann, Erich L.; ''Elements of Large Sample Theory'', Springer, 1999, p. 176). For distributions sufficiently far from normal and for sufficiently large sample sizes, the Mann–Whitney ''U'' test is considerably more efficient than the ''t''-test (Conover, William J.; ''Practical Nonparametric Statistics'', John Wiley & Sons, 1980 (2nd edition), pp. 225–226). This comparison in efficiency, however, should be interpreted with caution, as the Mann–Whitney test and the ''t''-test do not test the same quantities. If, for example, a difference of group means is of primary interest, the Mann–Whitney test is not an appropriate test.

The Mann–Whitney ''U'' test will give very similar results to performing an ordinary parametric two-sample ''t''-test on the rankings of the data.
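This last point can be illustrated with a small sketch (illustrative data): a two-sample ''t''-test applied to the pooled ranks gives a p-value close to, though not identical with, the Mann–Whitney result.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import mannwhitneyu, rankdata, ttest_ind

x = np.array([1.1, 2.3, 2.9, 4.2, 5.0, 6.1])   # illustrative data
y = np.array([0.8, 1.9, 2.5, 3.0, 3.3])

p_mwu = mannwhitneyu(x, y, alternative="two-sided").pvalue

ranks = rankdata(np.concatenate([x, y]))        # pooled ranks
p_rank_t = ttest_ind(ranks[:len(x)], ranks[len(x):]).pvalue

print(p_mwu, p_rank_t)   # typically close, though not identical
</syntaxhighlight>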


Different distributions

The Mann–Whitney ''U'' test is not valid for testing the null hypothesis P(Y>X) + 0.5\,P(Y=X) = 0.5 against the alternative hypothesis P(Y>X) + 0.5\,P(Y=X) \neq 0.5 without assuming that the distributions are the same under the null hypothesis (i.e., assuming F_1 = F_2). To test between those hypotheses, better tests are available. Among these are the Brunner–Munzel test and the Fligner–Policello test. Specifically, under the more general null hypothesis P(Y>X) + 0.5\,P(Y=X) = 0.5, the Mann–Whitney ''U'' test can have inflated type I error rates even in large samples (especially if the variances of the two populations are unequal and the sample sizes are different), a problem the better alternatives solve. As a result, it has been suggested to use one of the alternatives (specifically the Brunner–Munzel test) if it cannot be assumed that the distributions are equal under the null hypothesis.
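For reference, SciPy exposes both tests, so the comparison suggested here is easy to carry out; the data below are illustrative, and scipy.stats.brunnermunzel requires SciPy ≥ 1.2.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import mannwhitneyu, brunnermunzel

x = np.array([1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 4, 1, 1])   # illustrative data
y = np.array([3, 3, 4, 3, 1, 2, 3, 1, 1, 5, 4])

print(mannwhitneyu(x, y, alternative="two-sided").pvalue)   # Mann-Whitney U p-value
print(brunnermunzel(x, y).pvalue)                           # Brunner-Munzel p-value
</syntaxhighlight>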


Alternatives

If one desires a simple shift interpretation, the Mann–Whitney ''U'' test should ''not'' be used when the distributions of the two samples are very different, as it can give an erroneous interpretation of significant results. In that situation, the unequal-variances version of the ''t''-test may give more reliable results.

Similarly, some authors (e.g., Conover) suggest transforming the data to ranks (if they are not already ranks) and then performing the ''t''-test on the transformed data, the version of the ''t''-test used depending on whether or not the population variances are suspected to be different. Rank transformations do not preserve variances, but variances are recomputed from the samples after rank transformation.

The Brown–Forsythe test has been suggested as an appropriate non-parametric equivalent to the ''F''-test for equal variances.

A more powerful test is the Brunner–Munzel test, which outperforms the Mann–Whitney ''U'' test when the assumption of exchangeability is violated.

The Mann–Whitney ''U'' test is a special case of the proportional odds model, allowing for covariate adjustment.

See also the Kolmogorov–Smirnov test.


Related test statistics


Kendall's tau

The Mann–Whitney ''U'' test is related to a number of other non-parametric statistical procedures. For example, it is equivalent to Kendall's tau correlation coefficient if one of the variables is binary (that is, it can only take two values).


Software implementations

In many software packages, the Mann–Whitney ''U'' test (of the hypothesis of equal distributions against appropriate alternatives) has been poorly documented. Some packages incorrectly treat ties or fail to document asymptotic techniques (e.g., correction for continuity). A 2000 review discussed some of the following packages:
* MATLAB has ranksum in its Statistics Toolbox.
* R's base statistics implements the test (wilcox.test) in its "stats" package. Another R package will calculate the z statistic for a Wilcoxon two-sample, paired, or one-sample test.
* SAS implements the test in its PROC NPAR1WAY procedure.
* Python has an implementation of this test provided by SciPy (see the usage sketch after this list).
* SigmaStat (SPSS Inc., Chicago, IL)
* SYSTAT (SPSS Inc., Chicago, IL)
* Java has an implementation of this test provided by Apache Commons.
* Julia has implementations of this test through several packages. In the package HypothesisTests.jl, this is found as pvalue(MannWhitneyUTest(X, Y)).
* JMP (SAS Institute Inc., Cary, NC)
* S-Plus (MathSoft, Inc., Seattle, WA)
* STATISTICA (StatSoft, Inc., Tulsa, OK)
* UNISTAT (Unistat Ltd, London)
* SPSS (SPSS Inc, Chicago)
* StatsDirect (StatsDirect Ltd, Manchester, UK) implements all common variants.
* Stata (Stata Corporation, College Station, TX) implements the test in its ranksum command.
* StatXact (Cytel Software Corporation, Cambridge, Massachusetts)
* PSPP implements the test in its WILCOXON function.
* KNIME implements the test in its Wilcoxon–Mann–Whitney Test node.
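A minimal SciPy usage sketch (illustrative data; assumes SciPy ≥ 1.7, where the returned statistic is ''U'' for the first sample):

<syntaxhighlight lang="python">
from scipy.stats import mannwhitneyu

group_a = [19, 22, 16, 29, 24]   # illustrative data
group_b = [20, 11, 17, 12]

res = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(res.statistic, res.pvalue)   # U for group_a and the two-sided p-value
</syntaxhighlight>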


History

The statistic appeared in a 1914 article by the German Gustav Deuchler (with a missing term in the variance).

In a single paper in 1945, Frank Wilcoxon proposed both the one-sample signed rank and the two-sample rank sum test, in a test of significance with a point null hypothesis against its complementary alternative (that is, equal versus not equal). However, he only tabulated a few points for the equal-sample-size case in that paper (though in a later paper he gave larger tables).

A thorough analysis of the statistic, which included a recurrence allowing the computation of tail probabilities for arbitrary sample sizes and tables for sample sizes of eight or less, appeared in the article by Henry Mann and his student Donald Ransom Whitney in 1947. This article discussed alternative hypotheses, including a stochastic ordering (where the cumulative distribution functions satisfy the pointwise inequality F_X(t) < F_Y(t)). This paper also computed the first four moments and established the limiting normality of the statistic under the null hypothesis, so establishing that it is asymptotically distribution-free.


See also

* Lepage test
* Cucconi test
* Kolmogorov–Smirnov test
* Wilcoxon signed-rank test
* Kruskal–Wallis one-way analysis of variance
* Brunner–Munzel test
* Proportional odds model




External links

* Table of critical values of ''U'' (pdf)
* … for ''U'' and its significance
* Brief guide by experimental psychologist Karl L. Wuensch – Nonparametric effect size estimators (Copyright 2015 by Karl L. Wuensch)