In statistics, Cramér's V (sometimes referred to as Cramér's phi and denoted as φ''c'') is a measure of association between two nominal variables, giving a value between 0 and +1 (inclusive). It is based on Pearson's chi-squared statistic and was published by Harald Cramér in 1946.
Usage and interpretation
φ''c'' is the intercorrelation of two discrete variables [Sheskin, David J. (1997). Handbook of Parametric and Nonparametric Statistical Procedures. Boca Raton, FL: CRC Press.] and may be used with variables having two or more levels. φ''c'' is a symmetrical measure: it does not matter which variable we place in the columns and which in the rows. Also, the order of rows/columns does not matter, so φ''c'' may be used with nominal data types or higher (notably, ordered or numerical).
Cramér's V may also be applied to goodness-of-fit chi-squared models when there is a 1 × ''k'' table (in this case ''r'' = 1). In this case ''k'' is taken as the number of optional outcomes, and it functions as a measure of tendency towards a single outcome.
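The goodness-of-fit case can be sketched in Python; the counts below are hypothetical, and a uniform distribution is assumed as the null:

```python
import numpy as np

# Hypothetical observed counts for k = 3 outcomes (a 1 x k table, so r = 1).
observed = np.array([45.0, 30.0, 25.0])
n = observed.sum()
k = observed.size

# Assumed null hypothesis: all k outcomes equally likely.
expected = np.full(k, n / k)
chi2 = ((observed - expected) ** 2 / expected).sum()

# With r = 1, k (the number of optional outcomes) supplies the denominator.
v = np.sqrt(chi2 / (n * (k - 1)))
```

A value of ''v'' near 1 would indicate a strong tendency towards a single outcome; near 0, counts close to the assumed null.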
Cramér's V varies from 0 (corresponding to no association between the variables) to 1 (complete association) and can reach 1 only when each variable is completely determined by the other. It may be viewed as the association between two variables as a percentage of their maximum possible variation.
φ''c''² is the mean square canonical correlation between the variables.
In the case of a 2 × 2 contingency table, Cramér's V is equal to the absolute value of the phi coefficient.
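This 2 × 2 equivalence can be checked numerically. A minimal Python sketch, with a hypothetical table and illustrative helper functions (not from any named library):

```python
import numpy as np

def cramers_v(table):
    """Cramér's V of a contingency table (no bias correction)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    # Expected cell counts under independence of rows and columns.
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    r, k = table.shape
    return np.sqrt(chi2 / (n * min(k - 1, r - 1)))

def phi_coefficient(table):
    """Phi coefficient of a 2 x 2 table: (ad - bc) over the root of the marginal products."""
    (a, b), (c, d) = np.asarray(table, dtype=float)
    return (a * d - b * c) / np.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# For any 2 x 2 table, V equals |phi| (hypothetical counts).
t = [[10, 20], [30, 5]]
assert np.isclose(cramers_v(t), abs(phi_coefficient(t)))
```

The equality follows because for a 2 × 2 table χ² = ''n''φ² and min(''k'' − 1, ''r'' − 1) = 1.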
Note that as chi-squared values tend to increase with the number of cells, the greater the difference between ''r'' (rows) and ''c'' (columns), the more likely φ''c'' will tend to 1 without strong evidence of a meaningful correlation. [Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Routledge. https://doi.org/10.4324/9780203771587 pp. 78–81.]
Calculation
Let a sample of size ''n'' of the simultaneously distributed variables A and B for i = 1, \ldots, r; j = 1, \ldots, k be given by the frequencies
: n_{ij} = number of times the values (A_i, B_j) were observed.
The chi-squared statistic then is:
: \chi^2 = \sum_{i,j} \frac{\left(n_{ij} - \frac{n_{i.} n_{.j}}{n}\right)^2}{\frac{n_{i.} n_{.j}}{n}},
where n_{i.} = \sum_j n_{ij} is the number of times the value A_i is observed and n_{.j} = \sum_i n_{ij} is the number of times the value B_j is observed.
Cramér's V is computed by taking the square root of the chi-squared statistic divided by the sample size and the minimum dimension minus 1:
: V = \sqrt{\frac{\varphi^2}{\min(k - 1, r - 1)}} = \sqrt{\frac{\chi^2 / n}{\min(k - 1, r - 1)}},
where:
* \varphi is the phi coefficient,
* \chi^2 is derived from Pearson's chi-squared test,
* n is the grand total of observations,
* k is the number of columns, and
* r is the number of rows.
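The calculation above can be traced step by step in Python; the 3 × 2 table of frequencies below is hypothetical:

```python
import numpy as np

# Hypothetical 3 x 2 table of observed frequencies n_ij (r = 3 rows, k = 2 columns).
n_ij = np.array([[30.0, 10.0],
                 [20.0, 20.0],
                 [10.0, 40.0]])
n = n_ij.sum()            # grand total of observations
n_i = n_ij.sum(axis=1)    # row totals n_i.
n_j = n_ij.sum(axis=0)    # column totals n_.j

# Expected frequencies under independence: n_i. * n_.j / n for each cell.
expected = np.outer(n_i, n_j) / n
chi2 = ((n_ij - expected) ** 2 / expected).sum()

r, k = n_ij.shape
v = np.sqrt((chi2 / n) / min(k - 1, r - 1))
```

Here min(''k'' − 1, ''r'' − 1) = 1, so ''V'' reduces to the square root of χ²/''n''.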
The p-value for the significance of ''V'' is the same one that is calculated using Pearson's chi-squared test.
The formula for the variance of ''V'' = φ''c'' is known.
In R, the function cramerV() from the package rcompanion calculates ''V'' using the chisq.test function from the stats package. In contrast to the function cramersV() from the lsr package, cramerV() also offers an option to correct for bias. It applies the correction described in the following section.
Bias correction
Cramér's V can be a heavily biased estimator of its population counterpart and will tend to overestimate the strength of association. A bias correction, using the above notation, is given by
: \tilde V = \sqrt{\frac{\tilde\varphi^2}{\min(\tilde k - 1, \tilde r - 1)}},
where
: \tilde\varphi^2 = \max\left(0, \varphi^2 - \frac{(k-1)(r-1)}{n-1}\right)
and
: \tilde k = k - \frac{(k-1)^2}{n-1}
: \tilde r = r - \frac{(r-1)^2}{n-1}
Then \tilde V estimates the same population quantity as Cramér's V but with typically much smaller mean squared error. The rationale for the correction is that under independence, E[\varphi^2] = \frac{(k-1)(r-1)}{n-1}.
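Using the same notation, the correction can be sketched in Python; the function name and the sample table are illustrative, not from any named library:

```python
import numpy as np

def cramers_v_corrected(table):
    """Bias-corrected Cramér's V, following the correction described above."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    r, k = table.shape
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    phi2 = ((table - expected) ** 2 / expected).sum() / n  # phi^2 = chi^2 / n
    # Subtract the average value of phi^2 under independence, truncated at 0,
    # and shrink the table dimensions by the matching amounts.
    phi2_tilde = max(0.0, phi2 - (k - 1) * (r - 1) / (n - 1))
    k_tilde = k - (k - 1) ** 2 / (n - 1)
    r_tilde = r - (r - 1) ** 2 / (n - 1)
    return np.sqrt(phi2_tilde / min(k_tilde - 1, r_tilde - 1))
```

The truncation at zero means the corrected estimate is exactly 0 whenever φ² falls below its expected value under independence, rather than a spurious small positive association.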
See also
Other measures of correlation for nominal data:
* The phi coefficient
* Tschuprow's T
* The uncertainty coefficient
* The Lambda coefficient
* The Rand index
* Davies–Bouldin index
* Dunn index
* Jaccard index
* Fowlkes–Mallows index
Other related articles:
* Contingency table
* Effect size
References
External links
* A Measure of Association for Nonparametric Statistics (Alan C. Acock and Gordon R. Stavig, p. 1381 of 1381–1386), from the homepage of Pat Dattalo.