
In
statistics, the Kolmogorov–Smirnov test (K–S test or KS test) is a
nonparametric test of the equality of continuous (or discontinuous, see
Section 2.2), one-dimensional
probability distributions that can be used to compare a
sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). In essence, the test answers the question "What is the probability that this collection of samples could have been drawn from that probability distribution?" or, in the second case, "What is the probability that these two sets of samples were drawn from the same (but unknown) probability distribution?".
It is named after
Andrey Kolmogorov and
Nikolai Smirnov.
The Kolmogorov–Smirnov statistic quantifies a
distance between the
empirical distribution function of the sample and the
cumulative distribution function
of the reference distribution, or between the empirical distribution functions of two samples. The
null distribution of this statistic is calculated under the
null hypothesis that the sample is drawn from the reference distribution (in the one-sample case) or that the samples are drawn from the same distribution (in the two-sample case). In the one-sample case, the distribution considered under the null hypothesis may be continuous (see
Section 2), purely discrete or mixed (see
Section 2.2). In the two-sample case (see
Section 3), the distribution considered under the null hypothesis is a continuous distribution but is otherwise unrestricted. However, the
two-sample test can also be performed under more general conditions that allow for discontinuity, heterogeneity and dependence across samples.
The two-sample K–S test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.
The Kolmogorov–Smirnov test can be modified to serve as a
goodness of fit
test. In the special case of testing for
normality of the distribution, samples are standardized and compared with a standard normal distribution. This is equivalent to setting the mean and variance of the reference distribution equal to the sample estimates, and it is known that using these to define the specific reference distribution changes the null distribution of the test statistic (see
Test with estimated parameters). Various studies have found that, even in this corrected form, the test is less powerful for testing normality than the
Shapiro–Wilk test
or
Anderson–Darling test. However, these other tests have their own disadvantages. For instance, the Shapiro–Wilk test is known not to work well in samples with many identical values.
One-sample Kolmogorov–Smirnov statistic
The empirical distribution function $F_n$ for $n$ independent and identically distributed (i.i.d.) ordered observations $X_i$ is defined as

: $F_n(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{(-\infty, x]}(X_i),$

where $\mathbf{1}_{(-\infty, x]}(X_i)$ is the indicator function, equal to 1 if $X_i \le x$ and equal to 0 otherwise.
The Kolmogorov–Smirnov
statistic for a given
cumulative distribution function ''F''(''x'') is

: $D_n = \sup_x |F_n(x) - F(x)|,$

where $\sup_x$ is the
supremum of the set of distances. Intuitively, the statistic takes the largest absolute difference between the two distribution functions across all ''x'' values.
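Because the empirical distribution function is a step function that jumps only at the observations, the supremum is attained at (or just before) a sample point, so $D_n$ can be computed from the order statistics alone. The following is a minimal sketch in Python, not from the original article, assuming a standard normal reference distribution and NumPy/SciPy; it computes $D_n$ directly and checks it against scipy.stats.kstest:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = np.sort(rng.normal(size=200))   # sample, as sorted order statistics
    n = len(x)

    # The ECDF equals i/n just after x_(i) and (i-1)/n just before it, so the
    # supremum is one of these one-sided differences at an order statistic.
    cdf_vals = stats.norm.cdf(x)        # assumed reference: standard normal
    d_plus = np.max(np.arange(1, n + 1) / n - cdf_vals)
    d_minus = np.max(cdf_vals - np.arange(0, n) / n)
    d_n = max(d_plus, d_minus)

    print(d_n)
    print(stats.kstest(x, "norm").statistic)   # agrees with the manual value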
By the
Glivenko–Cantelli theorem, if the sample comes from distribution ''F''(''x''), then $D_n$ converges to 0 almost surely in the limit when $n$ goes to infinity. Kolmogorov strengthened this result, by effectively providing the rate of this convergence (see
Kolmogorov distribution).
Donsker's theorem provides a yet stronger result.
In practice, the statistic requires a relatively large number of data points (in comparison to other goodness of fit criteria such as the
Anderson–Darling test statistic) to properly reject the null hypothesis.
Kolmogorov distribution

The Kolmogorov distribution is the distribution of the random variable

: $K = \sup_{t \in [0,1]} |B(t)|,$
where ''B''(''t'') is the
Brownian bridge. The
cumulative distribution function
of ''K'' is given by
: $\operatorname{Pr}(K \leq x) = 1 - 2\sum_{k=1}^{\infty} (-1)^{k-1} e^{-2k^2 x^2} = \frac{\sqrt{2\pi}}{x} \sum_{k=1}^{\infty} e^{-(2k-1)^2 \pi^2 / (8x^2)},$
which can also be expressed by the Jacobi theta function $\vartheta_{01}(z = 0; \tau = 2ix^2/\pi)$. Both the form of the Kolmogorov–Smirnov test statistic and its asymptotic distribution under the null hypothesis were published by
Andrey Kolmogorov,
while a table of the distribution was published by
Nikolai Smirnov. Recurrence relations for the distribution of the test statistic in finite samples are available.
Under the null hypothesis that the sample comes from the hypothesized distribution ''F''(''x''),

: $\sqrt{n} D_n \xrightarrow{n \to \infty} \sup_t |B(F(t))|$

in distribution, where ''B''(''t'') is the Brownian bridge. If ''F'' is continuous, then under the null hypothesis $\sqrt{n} D_n$ converges to the Kolmogorov distribution, which does not depend on ''F''. This result may also be known as the Kolmogorov theorem.
The accuracy of this limit as an approximation to the exact cdf of $K$ when $n$ is finite is not very impressive: even when $n = 1000$, the corresponding maximum error is about 0.9%; this error increases to 2.6% when $n = 100$ and to a totally unacceptable 7% when $n = 10$. However, a very simple expedient of replacing $x$ by

: $x + \frac{1}{6\sqrt{n}} + \frac{x - 1}{4n}$

in the argument of the Jacobi theta function reduces these errors to 0.003%, 0.027%, and 0.27% respectively; such accuracy would be usually considered more than adequate for all practical applications.
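The effect of the corrected argument can be checked numerically. The sketch below is illustrative, not code from the article; it assumes a recent SciPy that provides scipy.stats.kstwo, the exact finite-$n$ distribution of $D_n$, and compares the plain and corrected series approximations against it:

    import numpy as np
    from scipy import stats

    def kolmogorov_cdf(x, terms=100):
        # Pr(K <= x) = 1 - 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 x^2)
        k = np.arange(1, terms + 1)
        return 1 - 2 * np.sum((-1) ** (k - 1) * np.exp(-2 * k**2 * x**2))

    n = 100
    x = 1.0                                      # a value of sqrt(n) * D_n
    exact = stats.kstwo.cdf(x / np.sqrt(n), n)   # exact finite-n cdf of D_n
    plain = kolmogorov_cdf(x)                    # plain asymptotic value
    corrected = kolmogorov_cdf(x + 1 / (6 * np.sqrt(n)) + (x - 1) / (4 * n))

    print(exact, plain, corrected)   # corrected lies far closer to exact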
The ''goodness-of-fit'' test or the Kolmogorov–Smirnov test can be constructed by using the critical values of the Kolmogorov distribution. This test is asymptotically valid when $n \to \infty$. It rejects the null hypothesis at level $\alpha$ if

: $\sqrt{n} D_n > K_\alpha,$

where $K_\alpha$ is found from

: $\operatorname{Pr}(K \leq K_\alpha) = 1 - \alpha.$
The asymptotic power of this test is 1.
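A minimal sketch of this rejection rule, assuming SciPy (whose scipy.stats.kstwobign is the Kolmogorov distribution of $\sup_t |B(t)|$):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.uniform(size=500)    # data tested against Uniform(0, 1)
    n = len(x)

    alpha = 0.05
    k_alpha = stats.kstwobign.ppf(1 - alpha)    # Pr(K <= K_alpha) = 1 - alpha
    d_n = stats.kstest(x, "uniform").statistic

    print("reject H0" if np.sqrt(n) * d_n > k_alpha else "fail to reject H0")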
Fast and accurate algorithms to compute the cdf $\operatorname{Pr}(D_n \leq x)$ or its complement for arbitrary $n$ and $x$ are available:
* for continuous null distributions, with code in C and Java;
* for purely discrete, mixed or continuous null distributions, implemented in the KSgeneral package of the R project for statistical computing, which for a given sample also computes the KS test statistic and its p-value; an alternative C++ implementation is also available.
Test with estimated parameters
If either the form or the parameters of ''F''(''x'') are determined from the data $X_i$, the critical values determined in this way are invalid. In such cases, Monte Carlo or other methods may be required, but tables have been prepared for some cases. Details for the required modifications to the test statistic and for the critical values for the normal distribution and the exponential distribution have been published, and later publications also include the Gumbel distribution. The Lilliefors test represents a special case of this for the normal distribution. The logarithm transformation may help to overcome cases where the Kolmogorov test data does not seem to fit the assumption that it came from the normal distribution.
When parameters are estimated, the question arises which estimation method should be used. Usually this would be the maximum likelihood method, but, e.g., for the normal distribution, MLE has a large bias error on sigma. Using a moment fit or KS minimization instead has a large impact on the critical values, and also some impact on test power. If we need to decide, for Student-T data with df = 2, via the KS test whether the data could be normal or not, then an ML estimate based on H0 (the data are normal, so using the standard deviation for scale) would give a much larger KS distance than a fit with minimum KS. In this case we should reject H0, which is often the case with MLE, because the sample standard deviation may be very large for T-2 data, but with KS minimization we may still get a KS statistic too low to reject H0. In the Student-T case, a modified KS test with a KS estimate instead of MLE indeed makes the KS test slightly worse. However, in other cases such a modified KS test leads to slightly better test power.
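The Monte Carlo approach mentioned above can be sketched for the normal case (essentially a Lilliefors-style test); this is an illustration rather than the published procedure, the function name lilliefors_pvalue is hypothetical, and the simulation size is arbitrary:

    import numpy as np
    from scipy import stats

    def lilliefors_pvalue(x, n_sim=2000, seed=0):
        # p-value by simulating the null with the parameters re-estimated on
        # each simulated sample, which is what invalidates the standard tables.
        rng = np.random.default_rng(seed)
        n = len(x)
        d_obs = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).statistic
        d_sim = np.empty(n_sim)
        for i in range(n_sim):
            y = rng.normal(size=n)
            d_sim[i] = stats.kstest(y, "norm", args=(y.mean(), y.std(ddof=1))).statistic
        return np.mean(d_sim >= d_obs)

    x = np.random.default_rng(2).normal(loc=5.0, scale=2.0, size=50)
    print(lilliefors_pvalue(x))   # approximately Uniform(0, 1) under H0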
Discrete and mixed null distribution
Under the assumption that $F(x)$ is non-decreasing and right-continuous, with a countable (possibly infinite) number of jumps, the KS test statistic can be expressed as:

: $D_n = \sup_x |F_n(x) - F(x)| = \sup_{0 \leq t \leq 1} |F_n(F^{-1}(t)) - F(F^{-1}(t))|.$

From the right-continuity of $F(x)$, it follows that $F(F^{-1}(t)) \geq t$ and $F^{-1}(F(x)) \leq x$, and hence the distribution of $D_n$ depends on the null distribution $F(x)$, i.e., it is no longer distribution-free as in the continuous case. Therefore, a fast and accurate method has been developed to compute the exact and asymptotic distribution of $D_n$ when $F(x)$ is purely discrete or mixed, implemented in C++ and in the KSgeneral package of the R language. The functions disc_ks_test(), mixed_ks_test() and cont_ks_test() also compute the KS test statistic and p-values for purely discrete, mixed or continuous null distributions and arbitrary sample sizes. The KS test and its p-values for discrete null distributions and small sample sizes are also computed as part of the dgof package of the R language. Major statistical packages, among which SAS (PROC NPAR1WAY) and Stata (ksmirnov), implement the KS test under the assumption that $F(x)$ is continuous, which is more conservative if the null distribution is actually not continuous.
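The conservativeness can be seen in a small simulation. The sketch below is illustrative only; it relies on the fact that SciPy's kstest converts the statistic to a p-value using continuous-case theory, and it draws repeated Poisson samples under H0 to show the rejection rate staying far below the nominal level:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n, n_rep = 30, 2000
    dist = stats.poisson(4)
    pvals = np.empty(n_rep)

    for i in range(n_rep):
        x = dist.rvs(size=n, random_state=rng)
        # kstest evaluates the hypothesized cdf at the data points and turns
        # the statistic into a p-value using continuous-case theory.
        pvals[i] = stats.kstest(x, dist.cdf).pvalue

    # An exact test would reject ~5% of the time under H0; this is far lower.
    print(np.mean(pvals < 0.05))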
Two-sample Kolmogorov–Smirnov test
The Kolmogorov–Smirnov test may also be used to test whether two underlying one-dimensional probability distributions differ. In this case, the Kolmogorov–Smirnov statistic is
: $D_{n,m} = \sup_x |F_{1,n}(x) - F_{2,m}(x)|,$

where $F_{1,n}$ and $F_{2,m}$ are the empirical distribution functions of the first and the second sample respectively, and $\sup$ is the supremum function.
For large samples, the null hypothesis is rejected at level $\alpha$ if

: $D_{n,m} > c(\alpha) \sqrt{\frac{n + m}{n m}},$

where $n$ and $m$ are the sizes of the first and second sample respectively. The value of $c(\alpha)$ is given in the table below for the most common levels of $\alpha$:

    α       0.20   0.15   0.10   0.05   0.025  0.01   0.005  0.001
    c(α)    1.073  1.138  1.224  1.358  1.480  1.628  1.731  1.949

and in general by

: $c(\alpha) = \sqrt{-\ln\!\left(\frac{\alpha}{2}\right) \cdot \frac{1}{2}},$

so that the condition reads

: $D_{n,m} > \sqrt{-\ln\!\left(\frac{\alpha}{2}\right) \cdot \frac{1}{2}} \cdot \sqrt{\frac{n + m}{n m}}.$

Here, again, the larger the sample sizes, the more sensitive the minimal bound: for a given ratio of sample sizes (e.g. $m = n$), the minimal bound scales in the size of either of the samples according to its inverse square root.
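A sketch of this large-sample rule in Python (illustrative, not from the article; it uses scipy.stats.ks_2samp for the statistic, with samples drawn under an assumed mean shift):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    x = rng.normal(0.0, 1.0, size=300)   # first sample, size n
    y = rng.normal(0.3, 1.0, size=200)   # second sample, size m, shifted mean
    n, m = len(x), len(y)

    alpha = 0.05
    c_alpha = np.sqrt(-0.5 * np.log(alpha / 2))   # = 1.358 for alpha = 0.05
    threshold = c_alpha * np.sqrt((n + m) / (n * m))

    d_nm = stats.ks_2samp(x, y).statistic
    print(d_nm, threshold, d_nm > threshold)      # True: distributions differ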
Note that the two-sample test checks whether the two data samples come from the same distribution. This does not specify what that common distribution is (e.g. whether it's normal or not normal). Again, tables of critical values have been published. A shortcoming of the univariate Kolmogorov–Smirnov test is that it is not very powerful because it is devised to be sensitive against all possible types of differences between two distribution functions. Some argue that the Cucconi test, originally proposed for simultaneously comparing location and scale, can be much more powerful than the Kolmogorov–Smirnov test when comparing two distribution functions.
Setting confidence limits for the shape of a distribution function
While the Kolmogorov–Smirnov test is usually used to test whether a given ''F''(''x'') is the underlying probability distribution of ''F''''n''(''x''), the procedure may be inverted to give confidence limits on ''F''(''x'') itself. If one chooses a critical value of the test statistic ''D''''α'' such that P(''D''''n'' > ''D''''α'') = ''α'', then a band of width ±''D''''α'' around ''F''''n''(''x'') will entirely contain ''F''(''x'') with probability 1 − ''α''.
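A sketch of the inverted procedure, using the asymptotic critical value $D_\alpha \approx K_\alpha / \sqrt{n}$ from SciPy (an approximation; exact finite-$n$ critical values would tighten the band slightly):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    x = np.sort(rng.normal(size=400))
    n = len(x)

    alpha = 0.05
    d_alpha = stats.kstwobign.ppf(1 - alpha) / np.sqrt(n)   # band half-width

    # Containment of F in the band everywhere is equivalent to D_n <= d_alpha,
    # checked at the ECDF values just before and just after each observation.
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    f_true = stats.norm.cdf(x)
    contained = np.all((f_true <= ecdf_lo + d_alpha) & (f_true >= ecdf_hi - d_alpha))
    print(contained)   # True with probability ~ 1 - alpha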
The Kolmogorov–Smirnov statistic in more than one dimension
A distribution-free multivariate Kolmogorov–Smirnov goodness of fit test has been proposed by Justel, Peña and Zamar (1997). The test uses a statistic which is built using Rosenblatt's transformation, and an algorithm is developed to compute it in the bivariate case. An approximate test that can be easily computed in any dimension is also presented.
The Kolmogorov–Smirnov test statistic needs to be modified if a similar test is to be applied to multivariate data. This is not straightforward because the maximum difference between two joint cumulative distribution functions is not generally the same as the maximum difference of any of the complementary distribution functions. Thus the maximum difference will differ depending on which of $\Pr(X < x \wedge Y < y)$ or $\Pr(X < x \wedge Y > y)$ or any of the other two possible arrangements is used. One might require that the result of the test used should not depend on which choice is made.
One approach to generalizing the Kolmogorov–Smirnov statistic to higher dimensions which meets the above concern is to compare the cdfs of the two samples with all possible orderings, and take the largest of the set of resulting KS statistics. In ''d'' dimensions, there are 2''d'' − 1 such orderings. One such variation is due to Peacock
(see also Gosset
for a 3D version)
and another to Fasano and Franceschini (see Lopes et al. for a comparison and computational details). Critical values for the test statistic can be obtained by simulations, but depend on the dependence structure in the joint distribution.
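The orderings idea can be sketched for a bivariate two-sample test as follows. This is an illustrative Fasano–Franceschini-style variant, not the authors' exact procedure: quadrant probabilities are compared at the sample points rather than over a full grid as in Peacock's test, the helper names quadrant_stat and ks2d are hypothetical, and critical values would still need simulation:

    import numpy as np

    def quadrant_stat(origins, a, b):
        # Largest difference between the two samples' empirical quadrant
        # probabilities, over all four quadrants anchored at each origin point.
        d = 0.0
        for px, py in origins:
            for sx in (np.less, np.greater_equal):
                for sy in (np.less, np.greater_equal):
                    fa = np.mean(sx(a[:, 0], px) & sy(a[:, 1], py))
                    fb = np.mean(sx(b[:, 0], px) & sy(b[:, 1], py))
                    d = max(d, abs(fa - fb))
        return d

    def ks2d(a, b):
        # Symmetrize by anchoring the quadrants at each sample in turn.
        return 0.5 * (quadrant_stat(a, a, b) + quadrant_stat(b, a, b))

    rng = np.random.default_rng(6)
    a = rng.normal(size=(200, 2))
    b = rng.normal(size=(200, 2)) * np.array([1.0, 2.0])   # wider spread in y
    print(ks2d(a, b))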
In one dimension, the Kolmogorov–Smirnov statistic is identical to the so-called star discrepancy D, so another native KS extension to higher dimensions would be simply to use D also for higher dimensions. Unfortunately, the star discrepancy is hard to calculate in high dimensions.
In 2021 the functional form of the multivariate KS test statistic was proposed, which simplified the problem of estimating the tail probabilities of the multivariate KS test statistic, which is needed for the statistical test. For the multivariate case, if $F_i$ is the $i$th continuous marginal from a probability distribution with $k$ variables, then

: $\sqrt{n} D_n \xrightarrow{n \to \infty} \sup_t |B(F_1(t_1), \ldots, F_k(t_k))|,$

so the limiting distribution does not depend on the marginal distributions.
Implementations
The Kolmogorov–Smirnov test is implemented in many software programs. Most of these implement both the one-sample and the two-sample test.
* Mathematica has KolmogorovSmirnovTest.
* MATLAB's Statistics Toolbox has kstest and kstest2 for one-sample and two-sample Kolmogorov–Smirnov tests, respectively.
* The R package KSgeneral computes the KS test statistic and its p-values under arbitrary, possibly discrete, mixed or continuous null distribution.
* R's statistics base-package implements the test as ks.test in its "stats" package.
* SAS implements the test in its PROC NPAR1WAY procedure.
* In Python, the SciPy package implements the test in the scipy.stats.kstest function.
* SYSTAT (SPSS Inc., Chicago, IL)
* Java has an implementation of this test provided by Apache Commons.
* KNIME has a node implementing this test based on the above Java implementation.
* Julia has the package HypothesisTests.jl with the function ExactOneSampleKSTest(x::AbstractVector, d::UnivariateDistribution).
* StatsDirect (StatsDirect Ltd, Manchester, UK) implements all common variants.
* Stata (Stata Corporation, College Station, TX) implements the test in its ksmirnov (Kolmogorov–Smirnov equality-of-distributions test) command.
* PSPP implements the test in its KOLMOGOROV-SMIRNOV (or using the KS shortcut) function.
* The Real Statistics Resource Pack for Excel runs the test as KSCRIT and KSPROB.
See also
* Lepage test
* Cucconi test
* Kuiper's test
* Shapiro–Wilk test
* Anderson–Darling test
* Cramér–von Mises test
External links
* Short introduction
* JavaScript implementation of one- and two-sided tests
* Online calculator with the KS test
* Open-source C++ code to compute the Kolmogorov distribution and perform the KS test
* Paper on Evaluating Kolmogorov's Distribution; contains a C implementation. This is the method used in Matlab.
* Paper on Computing the Two-Sided Kolmogorov–Smirnov Distribution; computing the cdf of the KS statistic in C or Java.
* Paper powerlaw: A Python Package for Analysis of Heavy-Tailed Distributions, by Jeff Alstott, Ed Bullmore and Dietmar Plenz. Among others, it also performs the Kolmogorov–Smirnov test. Source code and installers of the powerlaw package are available at PyPI.