Kendall rank correlation coefficient
In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's τ coefficient (after the Greek letter τ, tau), is a statistic used to measure the ordinal association between two measured quantities. A τ test is a non-parametric hypothesis test for statistical dependence based on the τ coefficient. It is a measure of rank correlation: the similarity of the orderings of the data when ranked by each of the quantities. It is named after Maurice Kendall, who developed it in 1938, though Gustav Fechner had proposed a similar measure in the context of time series in 1897.

Intuitively, the Kendall correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully different for a correlation of −1) rank between the two variables.

Both Kendall's \tau and Spearman's \rho can be formulated as special cases of a more general correlation coefficient. Its notions of concordance and discordance also appear in other areas of statistics, like the Rand index in cluster analysis.


Definition

Let (x_1,y_1), ..., (x_n,y_n) be a set of observations of the joint random variables ''X'' and ''Y'', such that all the values of (x_i) and (y_i) are unique (see the section Accounting for ties for ways of handling non-unique values). Any pair of observations (x_i,y_i) and (x_j,y_j), where i < j, are said to be ''concordant'' if the sort order of (x_i,x_j) and (y_i,y_j) agrees: that is, if either both x_i > x_j and y_i > y_j holds or both x_i < x_j and y_i < y_j holds; otherwise they are said to be ''discordant''.

In the absence of ties, the Kendall τ coefficient is defined as:

: \tau = \frac{(\text{number of concordant pairs}) - (\text{number of discordant pairs})}{\binom{n}{2}} = 1 - \frac{2\,(\text{number of discordant pairs})}{\binom{n}{2}},

where \binom{n}{2} = \frac{n(n-1)}{2} is the binomial coefficient for the number of ways to choose two items from n items. The number of discordant pairs is equal to the number of inversions of the permutation that puts the y-sequence into the same order as the x-sequence.
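As a concrete illustration of this definition, the following Python sketch counts concordant and discordant pairs directly for tie-free data (the function name is illustrative, not taken from any library):

    from itertools import combinations

    def kendall_tau_naive(x, y):
        # Kendall's tau for tie-free data by explicit pair counting, O(n^2).
        n = len(x)
        concordant = discordant = 0
        for i, j in combinations(range(n), 2):
            # The pair is concordant when the orderings of x and y agree.
            prod = (x[i] - x[j]) * (y[i] - y[j])
            if prod > 0:
                concordant += 1
            elif prod < 0:
                discordant += 1
        n_pairs = n * (n - 1) // 2
        return (concordant - discordant) / n_pairs

    # A perfectly concordant sample gives tau = 1.
    print(kendall_tau_naive([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0

This is the O(n^2) pair-counting approach implied by the definition; faster algorithms are discussed in the Algorithms section below.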


Properties

The denominator is the total number of pair combinations, so the coefficient must be in the range −1 ≤ ''τ'' ≤ 1.

* If the agreement between the two rankings is perfect (i.e., the two rankings are the same) the coefficient has value 1.
* If the disagreement between the two rankings is perfect (i.e., one ranking is the reverse of the other) the coefficient has value −1.
* If ''X'' and ''Y'' are independent random variables and not constant, then the expectation of the coefficient is zero.
* An explicit expression for Kendall's rank coefficient is

: \tau = \frac{2}{n(n-1)} \sum_{i<j} \sgn(x_i - x_j)\,\sgn(y_i - y_j).
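The explicit sgn expression translates directly into code; a minimal sketch, assuming NumPy is available (the function name is illustrative):

    import numpy as np

    def kendall_tau_sign_formula(x, y):
        # tau = 2/(n(n-1)) * sum over i<j of sgn(x_i - x_j) * sgn(y_i - y_j)
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        n = len(x)
        sx = np.sign(x[:, None] - x[None, :])   # n x n matrix of pairwise signs
        sy = np.sign(y[:, None] - y[None, :])
        iu = np.triu_indices(n, k=1)            # strict upper triangle: i < j
        return 2.0 / (n * (n - 1)) * np.sum(sx[iu] * sy[iu])

    print(kendall_tau_sign_formula([1, 2, 3, 4], [1, 3, 2, 4]))  # 0.666..., 5 concordant and 1 discordant pair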


Hypothesis test

The Kendall rank coefficient is often used as a test statistic in a statistical hypothesis test to establish whether two variables may be regarded as statistically dependent. This test is non-parametric, as it does not rely on any assumptions on the distributions of ''X'' or ''Y'' or the distribution of (''X'',''Y'').

Under the null hypothesis of independence of ''X'' and ''Y'', the sampling distribution of ''τ'' has an expected value of zero. The precise distribution cannot be characterized in terms of common distributions, but may be calculated exactly for small samples; for larger samples, it is common to use an approximation to the normal distribution, with mean zero and variance \frac{2(2n+5)}{9n(n-1)}.

Theorem. If the samples are independent, then the variance of \tau_A is given by \operatorname{Var}[\tau_A] = \frac{2(2n+5)}{9n(n-1)}.
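For small samples the exact null distribution can be obtained by brute force, since under independence every relative ordering of y is equally likely. The following sketch (illustrative code, not a library routine) enumerates all permutations for a small n and checks that the enumerated variance matches the formula in the theorem above:

    from itertools import permutations, combinations

    def tau_of_permutation(perm):
        # Kendall's tau between the identity ordering and a permutation of it.
        n = len(perm)
        n_d = sum(1 for i, j in combinations(range(n), 2) if perm[i] > perm[j])
        n_0 = n * (n - 1) // 2
        return 1 - 2 * n_d / n_0

    n = 6
    # Under independence every relative ordering of y is equally likely, so the n!
    # permutations give the exact null distribution of tau for a sample of size n.
    taus = [tau_of_permutation(p) for p in permutations(range(n))]
    exact_var = sum(t * t for t in taus) / len(taus)     # the null mean is 0 by symmetry
    formula_var = 2 * (2 * n + 5) / (9 * n * (n - 1))    # variance from the theorem above
    print(exact_var, formula_var)                        # both 0.1259...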


Case of standard normal distributions

If (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) are independent and identically distributed samples from the same jointly normal distribution with a known Pearson correlation coefficient r, then the expectation of the Kendall rank correlation has a closed-form formula, \operatorname{E}[\tau] = \frac{2}{\pi}\arcsin(r), sometimes called Greiner's relation; the name is credited to Richard Greiner (1909) by P. A. P. Moran.
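A minimal simulation sketch of this closed form, assuming NumPy and SciPy are available (the sample size, seed and value of r are arbitrary choices):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    r = 0.6                                   # known Pearson correlation of the bivariate normal
    xy = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=20000)

    tau_hat, _ = stats.kendalltau(xy[:, 0], xy[:, 1])
    tau_closed_form = 2.0 / np.pi * np.arcsin(r)   # Greiner's relation, E[tau] = (2/pi) arcsin(r)
    print(tau_hat, tau_closed_form)                # both close to 0.41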


Accounting for ties

A pair \{(x_i, y_i), (x_j, y_j)\} is said to be ''tied'' if and only if x_i = x_j or y_i = y_j; a tied pair is neither concordant nor discordant. When tied pairs arise in the data, the coefficient may be modified in a number of ways to keep it in the range [−1, 1]:


Tau-a

The Tau statistic defined by Kendall in 1938 was retrospectively renamed Tau-a. It represents the strength of positive or negative association of two quantitative or ordinal variables without any adjustment for ties. It is defined as:

: \tau_A = \frac{n_c - n_d}{n_0}

where n_c, n_d and n_0 are defined as in the next section. When ties are present, n_c + n_d < n_0, and the coefficient can never be equal to +1 or −1. Even perfect equality of the two variables (X = Y) leads to a Tau-a < 1, as illustrated below.
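A small sketch of this effect (illustrative code, not a library function): with X = Y but a tied value present, Tau-a stays below 1.

    from itertools import combinations

    def tau_a(x, y):
        # Tau-a: (n_c - n_d) / n_0, with no adjustment for ties.
        n = len(x)
        n_c = n_d = 0
        for i, j in combinations(range(n), 2):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            if prod > 0:
                n_c += 1
            elif prod < 0:
                n_d += 1
        return (n_c - n_d) / (n * (n - 1) // 2)

    x = [1, 2, 2, 3]      # contains one tied pair
    print(tau_a(x, x))    # 5/6 = 0.833..., below 1 even though the two variables are identical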


Tau-b

The Tau-b statistic, unlike Tau-a, makes adjustments for ties. This Tau-b was first described by Kendall in 1945 under the name Tau-w as an extension of the original Tau statistic supporting ties. Values of Tau-b range from −1 (100% negative association, or perfect disagreement) to +1 (100% positive association, or perfect agreement). In the absence of association, Tau-b is equal to zero.

The Kendall Tau-b coefficient is defined as:

: \tau_B = \frac{n_c - n_d}{\sqrt{(n_0 - n_1)(n_0 - n_2)}}

where

: \begin{align}
n_0 & = n(n-1)/2 \\
n_1 & = \sum_i t_i (t_i - 1)/2 \\
n_2 & = \sum_j u_j (u_j - 1)/2 \\
n_c & = \text{number of pairs } i < j \text{ with } (x_i < x_j \text{ and } y_i < y_j) \text{ or } (x_i > x_j \text{ and } y_i > y_j) \\
n_d & = \text{number of pairs } i < j \text{ with } (x_i < x_j \text{ and } y_i > y_j) \text{ or } (x_i > x_j \text{ and } y_i < y_j) \\
t_i & = \text{number of tied values in the } i^\text{th} \text{ group of ties for the first quantity} \\
u_j & = \text{number of tied values in the } j^\text{th} \text{ group of ties for the second quantity}
\end{align}

A simple algorithm developed in BASIC computes the Tau-b coefficient using an alternative formula. Be aware that some statistical packages, e.g. SPSS, use alternative formulas for computational efficiency, with double the 'usual' number of concordant and discordant pairs.
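The formula above can be transcribed directly; the following is a hedged sketch rather than the BASIC algorithm or the SPSS variant mentioned above (function names are illustrative):

    from collections import Counter
    from itertools import combinations
    import math

    def tau_b(x, y):
        # Tau-b with the tie-correction terms n_1 and n_2 from the formula above.
        n = len(x)
        n_c = n_d = 0
        for i, j in combinations(range(n), 2):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            if prod > 0:
                n_c += 1
            elif prod < 0:
                n_d += 1
        n_0 = n * (n - 1) // 2
        n_1 = sum(t * (t - 1) // 2 for t in Counter(x).values())   # ties within x
        n_2 = sum(u * (u - 1) // 2 for u in Counter(y).values())   # ties within y
        return (n_c - n_d) / math.sqrt((n_0 - n_1) * (n_0 - n_2))

    x = [1, 2, 2, 3]
    print(tau_b(x, x))    # 1.0: perfect agreement once ties are adjusted for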


Tau-c

Tau-c (also called Stuart-Kendall Tau-c) was first defined by Stuart in 1953. Contrary to Tau-b, Tau-c can be equal to +1 or −1 for non-square (i.e. rectangular) contingency tables, i.e. when the underlying scales of the two variables have different numbers of possible values. For instance, if the variable X has a continuous uniform distribution between 0 and 100 and Y is a dichotomous variable equal to 1 if X ≥ 50 and 0 if X < 50, the Tau-c statistic of X and Y is equal to 1 while Tau-b is equal to 0.707. A Tau-c equal to 1 can be interpreted as the best possible positive correlation conditional on the marginal distributions, while a Tau-b equal to 1 can be interpreted as the perfect positive monotonic correlation, where the distribution of X conditional on Y has zero variance and the distribution of Y conditional on X has zero variance, so that a bijective function f with f(X) = Y exists.

The Stuart-Kendall Tau-c coefficient is defined as:

: \tau_C = \frac{2(n_c - n_d)}{n^2 \frac{(m-1)}{m}} = \tau_A \frac{n-1}{n} \frac{m}{m-1}

where

: \begin{align}
n_c & = \text{number of concordant pairs} \\
n_d & = \text{number of discordant pairs} \\
r   & = \text{number of distinct values of } x_i \text{ (rows of the contingency table)} \\
c   & = \text{number of distinct values of } y_i \text{ (columns of the contingency table)} \\
m   & = \min(r, c)
\end{align}
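The dichotomisation example above can be checked numerically; the sketch below implements the Tau-c formula directly (illustrative code; sample size and seed are arbitrary):

    import numpy as np
    from itertools import combinations

    def tau_c(x, y):
        # Stuart-Kendall Tau-c: 2 m (n_c - n_d) / (n^2 (m - 1)).
        n = len(x)
        n_c = n_d = 0
        for i, j in combinations(range(n), 2):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            if prod > 0:
                n_c += 1
            elif prod < 0:
                n_d += 1
        m = min(len(set(x)), len(set(y)))
        return 2.0 * m * (n_c - n_d) / (n ** 2 * (m - 1))

    rng = np.random.default_rng(1)
    x = list(rng.uniform(0, 100, size=400))
    y = [1 if xi >= 50 else 0 for xi in x]    # dichotomised version of x, as in the example above
    print(tau_c(x, y))                        # close to 1

Computing Tau-b on the same data with the formula from the previous subsection gives roughly 0.71, consistent with the value quoted above.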


Significance tests

When two quantities are statistically dependent, the distribution of \tau is not easily characterizable in terms of known distributions. However, for \tau_A the following statistic, z_A, is approximately distributed as a standard normal when the variables are statistically independent:

: z_A = \frac{n_c - n_d}{\sqrt{v_0/18}}

where v_0 = n(n-1)(2n+5).

Thus, to test whether two variables are statistically dependent, one computes z_A and finds the cumulative probability for a standard normal distribution at −|z_A|. For a 2-tailed test, multiply that number by two to obtain the ''p''-value. If the ''p''-value is below a given significance level, one rejects the null hypothesis (at that significance level) that the quantities are statistically independent.

Numerous adjustments should be added to z_A when accounting for ties. The following statistic, z_B, has the same distribution as the \tau_B distribution, and is again approximately distributed as a standard normal when the quantities are statistically independent:

: z_B = \frac{n_c - n_d}{\sqrt{v}}

where

: \begin{align}
v   & = \frac{v_0 - v_t - v_u}{18} + v_1 + v_2 \\
v_0 & = n(n-1)(2n+5) \\
v_t & = \sum_i t_i (t_i - 1)(2t_i + 5) \\
v_u & = \sum_j u_j (u_j - 1)(2u_j + 5) \\
v_1 & = \sum_i t_i (t_i - 1) \sum_j u_j (u_j - 1) \,/\, (2n(n-1)) \\
v_2 & = \sum_i t_i (t_i - 1)(t_i - 2) \sum_j u_j (u_j - 1)(u_j - 2) \,/\, (9n(n-1)(n-2))
\end{align}

This is sometimes referred to as the Mann-Kendall test.
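A sketch of the z_A test as described above, assuming SciPy is available for the normal CDF (the example data are arbitrary and tie-free, since z_A makes no tie adjustment):

    import math
    from itertools import combinations
    from scipy.stats import norm

    def z_a_test(x, y):
        # Normal-approximation test based on z_A; no tie corrections are applied.
        n = len(x)
        n_c = n_d = 0
        for i, j in combinations(range(n), 2):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            if prod > 0:
                n_c += 1
            elif prod < 0:
                n_d += 1
        v_0 = n * (n - 1) * (2 * n + 5)
        z_a = (n_c - n_d) / math.sqrt(v_0 / 18.0)
        p_two_sided = 2 * norm.cdf(-abs(z_a))   # two-tailed p-value
        return z_a, p_two_sided

    x = [1, 2, 3, 4, 5, 6, 7, 8]
    y = [2, 1, 4, 3, 6, 5, 8, 7]
    print(z_a_test(x, y))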


Algorithms

The direct computation of the numerator n_c - n_d involves two nested iterations, as characterized by the following pseudocode:

    numer := 0
    for i := 2..N do
        for j := 1..(i − 1) do
            numer := numer + sign(x[i] − x[j]) × sign(y[i] − y[j])
    return numer

Although quick to implement, this algorithm is O(n^2) in complexity and becomes very slow on large samples. A more sophisticated algorithm built upon the Merge Sort algorithm can be used to compute the numerator in O(n \log n) time.

Begin by ordering your data points, sorting by the first quantity, x, and secondarily (among ties in x) by the second quantity, y. With this initial ordering, y is not sorted, and the core of the algorithm consists of computing how many steps a Bubble Sort would take to sort this initial y. An enhanced Merge Sort algorithm, with O(n \log n) complexity, can be applied to compute the number of swaps, S(y), that would be required by a Bubble Sort to sort y_i. Then the numerator for \tau is computed as:

: n_c - n_d = n_0 - n_1 - n_2 + n_3 - 2S(y),

where n_3 is computed like n_1 and n_2, but with respect to the joint ties in x and y.

A Merge Sort partitions the data to be sorted, y, into two roughly equal halves, y_\mathrm{left} and y_\mathrm{right}, then sorts each half recursively, and then merges the two sorted halves into a fully sorted vector. The number of Bubble Sort swaps is equal to:

: S(y) = S(y_\mathrm{left}) + S(y_\mathrm{right}) + M(Y_\mathrm{left}, Y_\mathrm{right})

where Y_\mathrm{left} and Y_\mathrm{right} are the sorted versions of y_\mathrm{left} and y_\mathrm{right}, and M(\cdot,\cdot) characterizes the Bubble Sort swap-equivalent for a merge operation. M(\cdot,\cdot) is computed as depicted in the following pseudocode:

    function M(L[1..n], R[1..m]) is
        i := 1
        j := 1
        nSwaps := 0
        while i ≤ n and j ≤ m do
            if R[j] < L[i] then
                nSwaps := nSwaps + n − i + 1
                j := j + 1
            else
                i := i + 1
        return nSwaps

A side effect of the above steps is that you end up with both a sorted version of x and a sorted version of y. With these, the factors t_i and u_j used to compute \tau_B are easily obtained in a single linear-time pass through the sorted arrays.
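The merge-sort approach can be sketched as follows for tie-free data (so n_1 = n_2 = n_3 = 0 and the numerator reduces to n_0 − 2S(y)); this is an illustrative Python transcription, not a reference implementation:

    def count_inversions(a):
        # Merge-sort based inversion count: the number of Bubble Sort swaps needed to sort a.
        if len(a) <= 1:
            return list(a), 0
        mid = len(a) // 2
        left, s_left = count_inversions(a[:mid])
        right, s_right = count_inversions(a[mid:])
        merged, swaps = [], s_left + s_right
        i = j = 0
        while i < len(left) and j < len(right):
            if right[j] < left[i]:
                swaps += len(left) - i          # right[j] jumps over every remaining left element
                merged.append(right[j])
                j += 1
            else:
                merged.append(left[i])
                i += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged, swaps

    def kendall_tau_mergesort(x, y):
        # O(n log n) Kendall tau for tie-free data: tau = 1 - 4 S(y) / (n (n - 1)).
        n = len(x)
        y_by_x = [yi for _, yi in sorted(zip(x, y))]   # order y by the corresponding x values
        _, s = count_inversions(y_by_x)
        n_0 = n * (n - 1) // 2
        return 1 - 2 * s / n_0

    print(kendall_tau_mergesort([1, 2, 3, 4], [1, 3, 2, 4]))   # 0.666..., matching the O(n^2) count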


Approximating Kendall rank correlation from a stream

Efficient algorithms for calculating the Kendall rank correlation coefficient as per the standard estimator have O(n \log n) time complexity. However, these algorithms necessitate the availability of all data to determine observation ranks, posing a challenge in sequential data settings where observations are revealed incrementally. Fortunately, algorithms do exist to estimate the Kendall rank correlation coefficient in sequential settings. These algorithms have O(1) update time and space complexity, scaling efficiently with the number of observations. Consequently, when processing a batch of n observations, the time complexity becomes O(n), while space complexity remains a constant O(1).

The first such algorithm presents an approximation to the Kendall rank correlation coefficient based on coarsening the joint distribution of the random variables. Non-stationary data is treated via a moving-window approach. This algorithm is simple and is able to handle discrete random variables along with continuous random variables without modification.

The second algorithm is based on Hermite series estimators and utilizes an alternative estimator for the exact Kendall rank correlation coefficient, i.e. for the probability of concordance minus the probability of discordance of pairs of bivariate observations. This alternative estimator also serves as an approximation to the standard estimator. This algorithm is only applicable to continuous random variables, but it has demonstrated superior accuracy and potential speed gains compared to the first algorithm described, along with the capability to handle non-stationary data without relying on sliding windows. An efficient implementation of the Hermite series based approach is contained in the R package hermiter.
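As a rough illustration of the coarsening idea only (not the published streaming estimators, which differ in detail), one can bin incoming observations into a fixed grid, update the cell counts in O(1) per observation, and read off the concordance-minus-discordance probability of the coarsened distribution on demand; the cut points here are assumed known in advance, and all names are illustrative:

    import numpy as np

    class CoarsenedKendall:
        # Illustrative streaming sketch: bin each incoming (x, y) pair into a fixed grid
        # and estimate tau from the cell counts.  The cut points are assumed known in
        # advance; the published streaming estimators are more sophisticated than this.

        def __init__(self, x_cuts, y_cuts):
            self.x_cuts = np.asarray(x_cuts)
            self.y_cuts = np.asarray(y_cuts)
            self.counts = np.zeros((len(x_cuts) + 1, len(y_cuts) + 1), dtype=np.int64)

        def update(self, x, y):
            # Constant-time update: locate the cell and increment its count.
            self.counts[np.searchsorted(self.x_cuts, x), np.searchsorted(self.y_cuts, y)] += 1

        def tau(self):
            # P(concordant) - P(discordant) for the coarsened joint distribution.
            p = self.counts / self.counts.sum()
            conc = disc = 0.0
            rows, cols = p.shape
            for a in range(rows):
                for b in range(cols):
                    conc += p[a, b] * p[a + 1:, b + 1:].sum()
                    disc += p[a, b] * p[a + 1:, :b].sum()
            return 2.0 * (conc - disc)

    rng = np.random.default_rng(2)
    est = CoarsenedKendall(x_cuts=np.linspace(-2, 2, 19), y_cuts=np.linspace(-2, 2, 19))
    for x, y in rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=5000):
        est.update(x, y)
    print(est.tau())    # near (2/pi) arcsin(0.6), about 0.41, up to discretisation error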


Software Implementations

* R implements the test for \tau_B as cor.test(x, y, method = "kendall") in its "stats" package (also cor(x, y, method = "kendall") will work, but the latter does not return the p-value). All three versions of the coefficient are available in the "DescTools" package along with the confidence intervals: KendallTauA(x,y,conf.level=0.95) for \tau_A, KendallTauB(x,y,conf.level=0.95) for \tau_B, and StuartTauC(x,y,conf.level=0.95) for \tau_C. Fast batch estimates of the Kendall rank correlation coefficient, along with sequential estimates, are provided in the package hermiter.
* For Python, the SciPy library implements the computation of \tau_B in scipy.stats.kendalltau.
* In Stata it is implemented as ktau varlist.
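A brief usage sketch of the SciPy interface listed above (the variant argument is assumed to be available in recent SciPy releases):

    from scipy import stats

    x = [12, 2, 1, 12, 2]
    y = [1, 4, 7, 1, 0]

    tau, p_value = stats.kendalltau(x, y)            # tau-b is the default variant
    print(tau, p_value)                              # tau is about -0.47 for these data
    # stats.kendalltau(x, y, variant="c")            # Stuart-Kendall tau-c in newer SciPy releases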


See also

* Correlation
* Kendall tau distance
* Kendall's W
* Spearman's rank correlation coefficient
* Goodman and Kruskal's gamma
* Theil–Sen estimator
* Mann–Whitney U test - it is equivalent to Kendall's tau correlation coefficient if one of the variables is binary.



External links


* Tied rank calculation
* Online software: computes Kendall's tau rank correlation