In statistics, an empirical distribution function (commonly also called an empirical cumulative distribution function, eCDF) is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by 1/n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value.

The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function.


Definition

Let (X_1, \ldots, X_n) be independent, identically distributed real random variables with the common cumulative distribution function F(t). Then the empirical distribution function is defined as

:\widehat F_n(t) = \frac{\text{number of elements in the sample} \le t}{n} = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{X_i \le t},

where \mathbf{1}_{A} is the indicator of event A. For a fixed t, the indicator \mathbf{1}_{X_i \le t} is a Bernoulli random variable with parameter p = F(t); hence n \widehat F_n(t) is a binomial random variable with mean nF(t) and variance nF(t)(1 - F(t)). This implies that \widehat F_n(t) is an unbiased estimator for F(t).

However, in some textbooks, the definition is given as

:\widehat F_n(t) = \frac{1}{n+1} \sum_{i=1}^n \mathbf{1}_{X_i \le t}.
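
To make the definition concrete, the following Python sketch (the helper name ecdf_at is our own choice, not part of any library) evaluates \widehat F_n(t) exactly as in the first formula above: it averages the indicators \mathbf{1}_{X_i \le t} over the sample.

```python
import numpy as np

def ecdf_at(sample, t):
    """Empirical distribution function of `sample` at t: the fraction
    of observations less than or equal to t."""
    sample = np.asarray(sample)
    # Mean of the indicators 1{X_i <= t} over the n observations.
    return np.mean(sample <= t)

# Example: a sample of n = 5 points.
x = [3.1, 1.4, 2.7, 5.0, 2.7]
for t in (1.0, 2.7, 6.0):
    print(t, ecdf_at(x, t))   # prints 0.0, 0.6 and 1.0
```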


Asymptotic properties

Since the ratio (n+1)/n approaches 1 as n goes to infinity, the asymptotic properties of the two definitions that are given above are the same.

By the strong law of large numbers, the estimator \widehat F_n(t) converges to F(t) as n \to \infty almost surely, for every value of t:

:\widehat F_n(t)\ \xrightarrow{\text{a.s.}}\ F(t);

thus the estimator \widehat F_n(t) is consistent. This expression asserts the pointwise convergence of the empirical distribution function to the true cumulative distribution function. There is a stronger result, called the Glivenko–Cantelli theorem, which states that the convergence in fact happens uniformly over t:

:\|\widehat F_n - F\|_\infty \equiv \sup_{t\in\mathbb{R}} \big|\widehat F_n(t) - F(t)\big| \ \xrightarrow{\text{a.s.}}\ 0.

The sup-norm in this expression is called the Kolmogorov–Smirnov statistic for testing the goodness-of-fit between the empirical distribution \widehat F_n(t) and the assumed true cumulative distribution function F. Other norm functions may be reasonably used here instead of the sup-norm. For example, the L2-norm gives rise to the Cramér–von Mises statistic.

The asymptotic distribution can be further characterized in several different ways. First, the central limit theorem states that ''pointwise'', \widehat F_n(t) has an asymptotically normal distribution with the standard \sqrt{n} rate of convergence:

:\sqrt{n}\big(\widehat F_n(t) - F(t)\big)\ \xrightarrow{d}\ \mathcal{N}\Big(0, F(t)\big(1-F(t)\big)\Big).

This result is extended by Donsker's theorem, which asserts that the ''empirical process'' \sqrt{n}(\widehat F_n - F), viewed as a function indexed by t \in \mathbb{R}, converges in distribution in the Skorokhod space D[-\infty, +\infty] to the mean-zero Gaussian process G_F = B \circ F, where B is the standard Brownian bridge. The covariance structure of this Gaussian process is

:\operatorname{E}[G_F(t_1) G_F(t_2)] = F(t_1 \wedge t_2) - F(t_1)F(t_2).

The uniform rate of convergence in Donsker's theorem can be quantified by the result known as the Hungarian embedding:

:\limsup_{n\to\infty} \frac{\sqrt{n}}{\ln n} \big\|\sqrt{n}(\widehat F_n - F) - G_{F,n}\big\|_\infty < \infty, \quad \text{almost surely.}

Alternatively, the rate of convergence of \sqrt{n}(\widehat F_n - F) can also be quantified in terms of the asymptotic behavior of the sup-norm of this expression. A number of results exist in this vein; for example, the Dvoretzky–Kiefer–Wolfowitz inequality provides a bound on the tail probabilities of \sqrt{n}\|\widehat F_n - F\|_\infty:

:\Pr\!\Big(\sqrt{n}\|\widehat F_n - F\|_\infty > z\Big) \le 2e^{-2z^2}.

In fact, Kolmogorov has shown that if the cumulative distribution function F is continuous, then the expression \sqrt{n}\|\widehat F_n - F\|_\infty converges in distribution to \|B\|_\infty, which has the Kolmogorov distribution that does not depend on the form of F.

Another result, which follows from the law of the iterated logarithm, is that

:\limsup_{n\to\infty} \frac{\sqrt{n}\,\|\widehat F_n - F\|_\infty}{\sqrt{2\ln\ln n}} \le \frac{1}{2}, \quad \text{almost surely,}

and

:\liminf_{n\to\infty} \sqrt{2n\ln\ln n}\, \|\widehat F_n - F\|_\infty = \frac{\pi}{2}, \quad \text{almost surely.}
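
These convergence results are easy to check by simulation. The sketch below (the helper name ks_distance is our own; it assumes NumPy and SciPy are available) draws standard normal samples of increasing size and computes the Kolmogorov–Smirnov sup-norm \|\widehat F_n - F\|_\infty, using the fact that for a step function the supremum is attained at the jump points; the printed distances shrink at roughly the 1/\sqrt{n} rate suggested by the central limit theorem.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def ks_distance(n):
    """Sup-norm distance between the eCDF of n standard normal draws
    and the true standard normal CDF."""
    x = np.sort(rng.standard_normal(n))
    F = norm.cdf(x)
    ranks = np.arange(1, n + 1) / n
    # The supremum of |F_n(t) - F(t)| is attained at a jump of F_n:
    # compare F just after (ranks) and just before (ranks - 1/n) each jump.
    return np.max(np.maximum(ranks - F, F - (ranks - 1.0 / n)))

for n in (100, 1_000, 10_000):
    print(n, ks_distance(n))   # distances shrink as n grows
```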


Confidence intervals

As per the Dvoretzky–Kiefer–Wolfowitz inequality, the interval that contains the true CDF, F(x), with probability 1-\alpha is specified as

:F_n(x) - \varepsilon \le F(x) \le F_n(x) + \varepsilon, \quad \text{where } \varepsilon = \sqrt{\frac{\ln\frac{2}{\alpha}}{2n}}.

As per the above bounds, we can plot the empirical CDF, the true CDF and the confidence band for different distributions by using any one of the statistical implementations listed in the next section.
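
As an illustration, here is a minimal sketch of the band computation in Python (the helper name dkw_band is our own choice): sort the sample, take the eCDF values i/n at the sorted points, and widen by \varepsilon on each side, clipping to [0, 1].

```python
import numpy as np

def dkw_band(sample, alpha=0.05):
    """eCDF of `sample` with a simultaneous 1 - alpha DKW confidence band."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    F_n = np.arange(1, n + 1) / n                 # eCDF at the sorted points
    eps = np.sqrt(np.log(2.0 / alpha) / (2 * n))  # DKW half-width
    return x, F_n, np.clip(F_n - eps, 0, 1), np.clip(F_n + eps, 0, 1)

# Example: a 95% band for an exponential sample of size 200.
rng = np.random.default_rng(1)
x, F_n, lower, upper = dkw_band(rng.exponential(size=200))
```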


Statistical implementation

A non-exhaustive list of software implementations of the empirical distribution function includes:
* In R, the ecdf function computes an empirical cumulative distribution function, with several methods for plotting, printing and computing with such an "ecdf" object.
* In MATLAB, we can use an empirical cumulative distribution function (cdf) plot.
* In JMP from SAS, the CDF plot creates a plot of the empirical cumulative distribution function.
* In Minitab, we can create an empirical CDF.
* In Mathwave, we can fit a probability distribution to our data.
* In Dataplot, we can plot an empirical CDF plot.
* In SciPy, we can use scipy.stats.ecdf.
* In Statsmodels, we can use statsmodels.distributions.empirical_distribution.ECDF (a usage sketch follows this list).
* In Matplotlib, using the matplotlib.pyplot.ecdf function (new in version 3.8.0).
* In Seaborn, using the seaborn.ecdfplot function.
* In Plotly, using the plotly.express.ecdf function.
* In Excel, we can plot an empirical CDF plot.
* In ArviZ, using the az.plot_ecdf function.
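
For instance, the Statsmodels implementation listed above returns a callable step function; a minimal usage sketch:

```python
import numpy as np
from statsmodels.distributions.empirical_distribution import ECDF

sample = np.random.default_rng(2).normal(size=500)
ecdf = ECDF(sample)                # a right-continuous step function

print(ecdf(0.0))                   # near 0.5 for a standard normal sample
print(ecdf([-1.96, 0.0, 1.96]))    # evaluates elementwise on sequences
```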


See also

* Càdlàg functions
* Count data
* Distribution fitting
* Dvoretzky–Kiefer–Wolfowitz inequality
* Empirical probability
* Empirical process
* Estimating quantiles from a sample
* Frequency (statistics)
* Empirical likelihood
* Kaplan–Meier estimator for censored processes
* Survival function
* Q–Q plot

