Kendall rank correlation coefficient

In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's τ coefficient (after the Greek letter τ, tau), is a statistic used to measure the ordinal association between two measured quantities. A τ test is a non-parametric hypothesis test for statistical dependence based on the τ coefficient. It is a measure of rank correlation: the similarity of the orderings of the data when ranked by each of the quantities. It is named after Maurice Kendall, who developed it in 1938, though Gustav Fechner had proposed a similar measure in the context of time series in 1897. Intuitively, the Kendall correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully different for a correlation of −1) rank between the two variables. Both Kendall's \tau and Spearman's \rho can be formulated as special cases of a more general correlation coefficient.


Definition

Let (x_1,y_1), ..., (x_n,y_n) be a set of observations of the joint random variables ''X'' and ''Y'', such that all the values of (x_i) and (y_i) are unique (ties are neglected for simplicity). Any pair of observations (x_i,y_i) and (x_j,y_j), where i < j, are said to be ''concordant'' if the sort order of (x_i,x_j) and (y_i,y_j) agrees: that is, if either both x_i>x_j and y_i>y_j hold or both x_i<x_j and y_i<y_j hold; otherwise they are said to be ''discordant''. The Kendall τ coefficient is defined as:

: \tau = \frac{(\text{number of concordant pairs}) - (\text{number of discordant pairs})}{\binom{n}{2}} = 1 - \frac{2\,(\text{number of discordant pairs})}{\binom{n}{2}},

where \binom{n}{2} = \frac{n(n-1)}{2} is the binomial coefficient for the number of ways to choose two items from n items.
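A minimal sketch of this definition, using brute-force pair counting on made-up, tie-free data (the function name and sample values are illustrative, not from the original text):

 from itertools import combinations
 
 def kendall_tau(x, y):
     # (concordant - discordant) / C(n, 2), valid when there are no ties
     n = len(x)
     concordant = discordant = 0
     for i, j in combinations(range(n), 2):
         # a pair is concordant when both orderings agree, discordant when they disagree
         if (x[i] - x[j]) * (y[i] - y[j]) > 0:
             concordant += 1
         else:
             discordant += 1
     return (concordant - discordant) / (n * (n - 1) / 2)
 
 print(kendall_tau([1, 2, 3, 4, 5], [3, 1, 2, 5, 4]))  # 0.4 (7 concordant, 3 discordant pairs)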


Properties

The denominator is the total number of pair combinations, so the coefficient must be in the range −1 ≤ ''τ'' ≤ 1.
* If the agreement between the two rankings is perfect (i.e., the two rankings are the same) the coefficient has value 1.
* If the disagreement between the two rankings is perfect (i.e., one ranking is the reverse of the other) the coefficient has value −1.
* If ''X'' and ''Y'' are independent and not constant, then the expectation of the coefficient is zero.
* An explicit expression for Kendall's rank coefficient is \tau = \frac{2}{n(n-1)} \sum_{i<j} \sgn(x_i - x_j)\, \sgn(y_i - y_j) (see the sketch below).
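The explicit expression translates directly into code; the following vectorized sketch (function name and sample data are illustrative) gives the same value as the pair-counting definition above:

 import numpy as np
 
 def kendall_tau_sgn(x, y):
     # explicit formula: 2/(n(n-1)) * sum over i<j of sgn(x_i - x_j) * sgn(y_i - y_j)
     x, y = np.asarray(x), np.asarray(y)
     n = len(x)
     i, j = np.triu_indices(n, k=1)   # all index pairs with i < j
     s = np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
     return 2.0 * s.sum() / (n * (n - 1))
 
 print(kendall_tau_sgn([1, 2, 3, 4, 5], [3, 1, 2, 5, 4]))  # 0.4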


Hypothesis test

The Kendall rank coefficient is often used as a test statistic in a statistical hypothesis test to establish whether two variables may be regarded as statistically dependent. This test is non-parametric, as it does not rely on any assumptions on the distributions of ''X'' or ''Y'' or the distribution of (''X'',''Y''). Under the null hypothesis of independence of ''X'' and ''Y'', the sampling distribution of ''τ'' has an expected value of zero. The precise distribution cannot be characterized in terms of common distributions, but may be calculated exactly for small samples; for larger samples, it is common to use an approximation to the normal distribution, with mean zero and variance

: \frac{2(2n+5)}{9n(n-1)}.
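As a sketch of how this normal approximation is used in practice (function name and sample values are illustrative; for small n the exact distribution is preferable):

 import math
 from statistics import NormalDist
 
 def tau_normal_pvalue(tau, n):
     # two-sided p-value under independence, using variance 2(2n+5) / (9n(n-1))
     var = 2 * (2 * n + 5) / (9 * n * (n - 1))
     z = tau / math.sqrt(var)
     return 2 * NormalDist().cdf(-abs(z))
 
 print(tau_normal_pvalue(0.4, 10))  # roughly 0.11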


Accounting for ties

A pair \{(x_i,y_i),\,(x_j,y_j)\} is said to be ''tied'' if x_i=x_j or y_i=y_j; a tied pair is neither concordant nor discordant. When tied pairs arise in the data, the coefficient may be modified in a number of ways to keep it in the range [−1, 1]:


Tau-a

The Tau-a statistic tests the strength of association of the cross tabulations. Both variables have to be ordinal. Tau-a will not make any adjustment for ties. It is defined as:

: \tau_A = \frac{n_c - n_d}{n_0}

where n_c, n_d and n_0 are defined as in the next section.


Tau-b

The Tau-b statistic, unlike Tau-a, makes adjustments for ties. Values of Tau-b range from −1 (100% negative association, or perfect inversion) to +1 (100% positive association, or perfect agreement). A value of zero indicates the absence of association. The Kendall Tau-b coefficient is defined as:

: \tau_B = \frac{n_c - n_d}{\sqrt{(n_0 - n_1)(n_0 - n_2)}}

where

: \begin{align}
n_0 & = n(n-1)/2 \\
n_1 & = \sum_i t_i (t_i-1)/2 \\
n_2 & = \sum_j u_j (u_j-1)/2 \\
n_c & = \text{number of concordant pairs} \\
n_d & = \text{number of discordant pairs} \\
t_i & = \text{number of tied values in the } i^\text{th} \text{ group of ties for the first quantity} \\
u_j & = \text{number of tied values in the } j^\text{th} \text{ group of ties for the second quantity}
\end{align}

A simple algorithm developed in BASIC computes the Tau-b coefficient using an alternative formula. Be aware that some statistical packages, e.g. SPSS, use alternative formulas for computational efficiency, with double the 'usual' number of concordant and discordant pairs.
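A brute-force sketch of the Tau-b computation on small, tied data (function name and sample values are illustrative; the O(n^2) loop is used only for clarity):

 from itertools import combinations
 from collections import Counter
 
 def kendall_tau_b(x, y):
     # count concordant and discordant pairs; tied pairs count as neither
     n = len(x)
     nc = nd = 0
     for i, j in combinations(range(n), 2):
         s = (x[i] - x[j]) * (y[i] - y[j])
         if s > 0:
             nc += 1
         elif s < 0:
             nd += 1
     # tie corrections n1 and n2 from the sizes of the tie groups
     n0 = n * (n - 1) // 2
     n1 = sum(t * (t - 1) // 2 for t in Counter(x).values())
     n2 = sum(u * (u - 1) // 2 for u in Counter(y).values())
     return (nc - nd) / ((n0 - n1) * (n0 - n2)) ** 0.5
 
 print(kendall_tau_b([1, 2, 2, 3], [1, 2, 3, 3]))  # 0.8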


Tau-c

Tau-c (also called Stuart-Kendall Tau-c) is more suitable than Tau-b for the analysis of data based on non-square (i.e. rectangular) contingency tables. So use Tau-b if the underlying scale of both variables has the same number of possible values (before ranking) and Tau-c if they differ. For instance, one variable might be scored on a 5-point scale (very good, good, average, bad, very bad), whereas the other might be based on a finer 10-point scale. The Kendall Tau-c coefficient is defined as:

: \tau_C = \frac{2 (n_c - n_d)}{n^2 \, \frac{m-1}{m}}

where

: \begin{align}
n_c & = \text{number of concordant pairs} \\
n_d & = \text{number of discordant pairs} \\
r & = \text{number of rows} \\
c & = \text{number of columns} \\
m & = \min(r, c)
\end{align}
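A corresponding sketch for Tau-c (again with illustrative names and data; here the numbers of rows and columns are taken as the numbers of distinct values observed for each variable):

 from itertools import combinations
 
 def kendall_tau_c(x, y):
     # same concordant/discordant counts as above, rescaled by 2m / (n^2 (m-1))
     n = len(x)
     nc = nd = 0
     for i, j in combinations(range(n), 2):
         s = (x[i] - x[j]) * (y[i] - y[j])
         if s > 0:
             nc += 1
         elif s < 0:
             nd += 1
     m = min(len(set(x)), len(set(y)))   # m = min(r, c)
     return 2 * m * (nc - nd) / (n ** 2 * (m - 1))
 
 print(kendall_tau_c([1, 2, 2, 3], [1, 2, 3, 3]))  # 0.75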


Significance tests

When two quantities are statistically independent, the distribution of \tau is not easily characterizable in terms of known distributions. However, for \tau_A the following statistic, z_A, is approximately distributed as a standard normal when the variables are statistically independent:

: z_A = \frac{3 (n_c - n_d)}{\sqrt{n(n-1)(2n+5)/2}}

Thus, to test whether two variables are statistically dependent, one computes z_A, and finds the cumulative probability for a standard normal distribution at -|z_A|. For a 2-tailed test, multiply that number by two to obtain the ''p''-value. If the ''p''-value is below a given significance level, one rejects the null hypothesis (at that significance level) that the quantities are statistically independent.

Numerous adjustments should be added to z_A when accounting for ties. The following statistic, z_B, has the same distribution as the \tau_B distribution, and is again approximately equal to a standard normal distribution when the quantities are statistically independent:

: z_B = \frac{n_c - n_d}{\sqrt{v}}

where

: \begin{align}
v & = (v_0 - v_t - v_u)/18 + v_1 + v_2 \\
v_0 & = n (n-1) (2n+5) \\
v_t & = \sum_i t_i (t_i-1) (2 t_i+5) \\
v_u & = \sum_j u_j (u_j-1) (2 u_j+5) \\
v_1 & = \sum_i t_i (t_i-1) \sum_j u_j (u_j-1) / (2n(n-1)) \\
v_2 & = \sum_i t_i (t_i-1) (t_i-2) \sum_j u_j (u_j-1) (u_j-2) / (9 n (n-1) (n-2))
\end{align}

This is sometimes referred to as the Mann-Kendall test.
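A sketch of the tie-corrected statistic z_B (the helper name is illustrative; n_c and n_d are counted as in the Tau-b section above, and t_i, u_j are the tie-group sizes):

 from collections import Counter
 
 def z_b(nc, nd, x, y):
     # variance v follows the expression above, combining v0, vt, vu, v1 and v2
     n = len(x)
     t = list(Counter(x).values())
     u = list(Counter(y).values())
     v0 = n * (n - 1) * (2 * n + 5)
     vt = sum(ti * (ti - 1) * (2 * ti + 5) for ti in t)
     vu = sum(uj * (uj - 1) * (2 * uj + 5) for uj in u)
     v1 = sum(ti * (ti - 1) for ti in t) * sum(uj * (uj - 1) for uj in u) / (2 * n * (n - 1))
     v2 = (sum(ti * (ti - 1) * (ti - 2) for ti in t)
           * sum(uj * (uj - 1) * (uj - 2) for uj in u)) / (9 * n * (n - 1) * (n - 2))
     v = (v0 - vt - vu) / 18 + v1 + v2
     return (nc - nd) / v ** 0.5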


Algorithms

The direct computation of the numerator n_c - n_d involves two nested iterations, as characterized by the following pseudocode:

 numer := 0
 for i := 2..N do
     for j := 1..(i − 1) do
         numer := numer + sign(x[i] − x[j]) × sign(y[i] − y[j])
 return numer

Although quick to implement, this algorithm is O(n^2) in complexity and becomes very slow on large samples. A more sophisticated algorithm built upon the Merge Sort algorithm can be used to compute the numerator in O(n \log n) time. Begin by ordering your data points, sorting by the first quantity, x, and secondarily (among ties in x) by the second quantity, y. With this initial ordering, y is not sorted, and the core of the algorithm consists of computing how many steps a Bubble Sort would take to sort this initial y. An enhanced Merge Sort algorithm, with O(n \log n) complexity, can be applied to compute the number of swaps, S(y), that would be required by a Bubble Sort to sort y_i. Then the numerator for \tau is computed as:

: n_c - n_d = n_0 - n_1 - n_2 + n_3 - 2 S(y),

where n_3 is computed like n_1 and n_2, but with respect to the joint ties in x and y.

A Merge Sort partitions the data to be sorted, y, into two roughly equal halves, y_\mathrm{left} and y_\mathrm{right}, then sorts each half recursively, and then merges the two sorted halves into a fully sorted vector. The number of Bubble Sort swaps is equal to:

: S(y) = S(y_\mathrm{left}) + S(y_\mathrm{right}) + M(Y_\mathrm{left}, Y_\mathrm{right})

where Y_\mathrm{left} and Y_\mathrm{right} are the sorted versions of y_\mathrm{left} and y_\mathrm{right}, and M(\cdot,\cdot) characterizes the Bubble Sort swap-equivalent for a merge operation. M(\cdot,\cdot) is computed as depicted in the following pseudocode:

 function M(L[1..n], R[1..m]) is
     i := 1
     j := 1
     nSwaps := 0
     while i ≤ n and j ≤ m do
         if R[j] < L[i] then
             nSwaps := nSwaps + n − i + 1
             j := j + 1
         else
             i := i + 1
     return nSwaps

A side effect of the above steps is that you end up with both a sorted version of x and a sorted version of y. With these, the factors t_i and u_j used to compute \tau_B are easily obtained in a single linear-time pass through the sorted arrays.
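The following Python sketch illustrates the approach (all names are illustrative): a merge sort that counts the Bubble Sort swaps S(y), combined with the tie counts n_1, n_2 and n_3 to obtain the numerator n_c - n_d.

 from collections import Counter
 
 def merge_count(seq):
     # merge sort that also counts the Bubble Sort swaps (inversions) needed to sort seq
     if len(seq) <= 1:
         return 0, list(seq)
     mid = len(seq) // 2
     s_left, left = merge_count(seq[:mid])
     s_right, right = merge_count(seq[mid:])
     swaps = s_left + s_right
     merged = []
     i = j = 0
     while i < len(left) and j < len(right):
         if right[j] < left[i]:
             swaps += len(left) - i   # right[j] jumps over every remaining element of the left half
             merged.append(right[j])
             j += 1
         else:
             merged.append(left[i])
             i += 1
     merged += left[i:] + right[j:]
     return swaps, merged
 
 def kendall_numerator(x, y):
     # n_c - n_d in O(n log n), using n_0 - n_1 - n_2 + n_3 - 2*S(y)
     n = len(x)
     order = sorted(range(n), key=lambda k: (x[k], y[k]))   # sort by x, break ties by y
     s, _ = merge_count([y[k] for k in order])
     n0 = n * (n - 1) // 2
     n1 = sum(t * (t - 1) // 2 for t in Counter(x).values())
     n2 = sum(u * (u - 1) // 2 for u in Counter(y).values())
     n3 = sum(v * (v - 1) // 2 for v in Counter(zip(x, y)).values())
     return n0 - n1 - n2 + n3 - 2 * s
 
 print(kendall_numerator([1, 2, 3, 4, 5], [3, 1, 2, 5, 4]))   # 4 (7 concordant - 3 discordant)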


Software implementations

* R's base "stats" package implements the test as cor.test(x, y, method = "kendall") (cor(x, y, method = "kendall") will also compute the coefficient, but the latter does not return the ''p''-value).
* For Python, the SciPy library implements the computation of \tau in scipy.stats.kendalltau (see the usage sketch below).
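A short usage sketch of the SciPy routine (sample values are illustrative; by default the routine returns the tie-adjusted Tau-b together with an associated p-value):

 from scipy.stats import kendalltau
 
 tau, p_value = kendalltau([1, 2, 3, 4, 5], [3, 1, 2, 5, 4])
 print(tau, p_value)   # tau is 0.4; the p-value tests the hypothesis of independence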


See also

* Correlation
* Kendall tau distance
* Kendall's W
* Spearman's rank correlation coefficient
* Goodman and Kruskal's gamma
* Theil–Sen estimator
* Mann–Whitney U test - it is equivalent to Kendall's tau correlation coefficient if one of the variables is binary.




External links


* Tied rank calculation
* Online software: computes Kendall's tau rank correlation