In statistics, Cramér's V (sometimes referred to as Cramér's phi and denoted as φ''c'') is a measure of association between two nominal variables, giving a value between 0 and +1 (inclusive). It is based on Pearson's chi-squared statistic and was published by Harald Cramér in 1946.
Usage and interpretation
φ''c'' is the intercorrelation of two discrete variables[Sheskin, David J. (1997). ''Handbook of Parametric and Nonparametric Statistical Procedures''. Boca Raton, FL: CRC Press.] and may be used with variables having two or more levels. φ''c'' is a symmetrical measure: it does not matter which variable we place in the columns and which in the rows. Also, the order of rows/columns does not matter, so φ''c'' may be used with nominal data types or higher (notably, ordered or numerical).
Cramér's V varies from 0 (corresponding to
no association between the variables) to 1 (complete association) and can reach 1 only when each variable is completely determined by the other. It may be viewed as the association between two variables as a percentage of their maximum possible variation.
φ''c''² is the mean square canonical correlation between the variables.
In the case of a 2 × 2 contingency table, Cramér's V is equal to the absolute value of the phi coefficient.
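The 2 × 2 equivalence can be checked numerically. The following is a minimal pure-Python sketch (the function names are illustrative, not from any library); for a table [[''a'', ''b''], [''c'', ''d'']], it uses the closed form of the phi coefficient and of Pearson's chi-squared statistic:

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for the 2x2 table [[a, b], [c, d]]; may be negative."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

def cramers_v_2x2(a, b, c, d):
    """Cramér's V for a 2x2 table; min(k - 1, r - 1) = 1, so V = sqrt(chi2 / n)."""
    n = a + b + c + d
    # Closed form of Pearson's chi-squared statistic for a 2x2 table
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return math.sqrt(chi2 / n)

# phi is negative here, yet V equals its absolute value
a, b, c, d = 5, 15, 20, 10
assert abs(cramers_v_2x2(a, b, c, d) - abs(phi_coefficient(a, b, c, d))) < 1e-12
```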
Calculation
Let a sample of size ''n'' of the simultaneously distributed variables <math>A</math> and <math>B</math> for <math>i = 1, \ldots, r</math>; <math>j = 1, \ldots, k</math> be given by the frequencies
:<math>n_{ij} =</math> number of times the values <math>(A_i, B_j)</math> were observed.
The chi-squared statistic then is:
:<math>\chi^2 = \sum_{i,j} \frac{\left(n_{ij} - \frac{n_{i.} n_{.j}}{n}\right)^2}{\frac{n_{i.} n_{.j}}{n}},</math>
where <math>n_{i.} = \sum_j n_{ij}</math> is the number of times the value <math>A_i</math> is observed and <math>n_{.j} = \sum_i n_{ij}</math> is the number of times the value <math>B_j</math> is observed.
Cramér's V is computed by taking the square root of the chi-squared statistic divided by the sample size and the minimum dimension minus 1:
:<math>V = \sqrt{\frac{\varphi^2}{\min(k - 1, r - 1)}} = \sqrt{\frac{\chi^2 / n}{\min(k - 1, r - 1)}},</math>
where:
* <math>\varphi</math> is the phi coefficient,
* <math>\chi^2</math> is derived from Pearson's chi-squared test,
* <math>n</math> is the grand total of observations,
* <math>k</math> is the number of columns,
* <math>r</math> is the number of rows.
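The computation above can be sketched in pure Python for a general ''r'' × ''k'' contingency table given as a list of rows (the function name is illustrative; in practice a library routine such as `scipy.stats` would normally supply the chi-squared statistic):

```python
import math

def cramers_v(table):
    """Cramér's V for an r x k contingency table (list of lists of counts)."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    # Pearson's chi-squared: sum of (observed - expected)^2 / expected,
    # with expected cell counts row_tot[i] * col_tot[j] / n under independence
    chi2 = sum(
        (table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
        / (row_tot[i] * col_tot[j] / n)
        for i in range(len(row_tot))
        for j in range(len(col_tot))
    )
    r, k = len(row_tot), len(col_tot)
    return math.sqrt((chi2 / n) / min(k - 1, r - 1))

# Strong diagonal association in a 2x2 table
print(cramers_v([[30, 10], [10, 30]]))  # -> 0.5
```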
The p-value for the significance of ''V'' is the same one that is calculated using the Pearson's chi-squared test.
The formula for the variance of ''V'' = φ''c'' is known.
In R, the function cramerV() from the package rcompanion calculates ''V'' using the chisq.test function from the stats package. In contrast to the function cramersV() from the lsr package, cramerV() also offers an option to correct for bias. It applies the correction described in the following section.
Bias correction
Cramér's V can be a heavily biased estimator of its population counterpart and will tend to overestimate the strength of association. A bias correction, using the above notation, is given by
:<math>\tilde V = \sqrt{\frac{\tilde\varphi^2}{\min(\tilde k - 1, \tilde r - 1)}},</math>
where
:<math>\tilde\varphi^2 = \max\left(0, \varphi^2 - \frac{(k - 1)(r - 1)}{n - 1}\right)</math>
and
:<math>\tilde k = k - \frac{(k - 1)^2}{n - 1},</math>
:<math>\tilde r = r - \frac{(r - 1)^2}{n - 1}.</math>
Then <math>\tilde V</math> estimates the same population quantity as Cramér's V but with typically much smaller mean squared error. The rationale for the correction is that under independence, <math>E\left[\varphi^2\right] = \frac{(k - 1)(r - 1)}{n - 1}</math>.
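A minimal pure-Python sketch of the bias-corrected estimator, following the formulas above (the function name is illustrative; under exact independence the correction drives the estimate to zero):

```python
import math

def cramers_v_corrected(table):
    """Bias-corrected Cramér's V for an r x k contingency table."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = sum(
        (table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
        / (row_tot[i] * col_tot[j] / n)
        for i in range(len(row_tot))
        for j in range(len(col_tot))
    )
    r, k = len(row_tot), len(col_tot)
    # Subtract the expected value of phi^2 under independence, floored at 0
    phi2_tilde = max(0.0, chi2 / n - (k - 1) * (r - 1) / (n - 1))
    # Shrink the dimensions correspondingly
    k_tilde = k - (k - 1) ** 2 / (n - 1)
    r_tilde = r - (r - 1) ** 2 / (n - 1)
    return math.sqrt(phi2_tilde / min(k_tilde - 1, r_tilde - 1))

# A perfectly independent table yields exactly 0 after correction
print(cramers_v_corrected([[10, 10], [10, 10]]))  # -> 0.0
```

For associated tables the corrected value is slightly smaller than the uncorrected ''V'', reflecting the removal of the upward bias.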
See also
Other measures of correlation for nominal data:
* The Percent Maximum Difference
* The phi coefficient
* Tschuprow's T
* The uncertainty coefficient
* The Lambda coefficient
* The Rand index
* Davies–Bouldin index
* Dunn index
* Jaccard index
* Fowlkes–Mallows index
Other related articles:
* Contingency table
* Effect size
References
External links
A Measure of Association for Nonparametric Statistics (Alan C. Acock and Gordon R. Stavig, pp. 1381–1386), from the homepage of Pat Dattalo.