Empirical distribution function
In statistics, an empirical distribution function (commonly also called an empirical cumulative distribution function, eCDF) is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by 1/n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value. The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function.


Definition

Let (X_1, \ldots, X_n) be independent, identically distributed real random variables with the common cumulative distribution function F(t). Then the empirical distribution function is defined as

: \widehat F_n(t) = \frac{\text{number of elements in the sample} \le t}{n} = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{X_i \le t},

where \mathbf{1}_{A} is the indicator of event A. For a fixed t, the indicator \mathbf{1}_{X_i \le t} is a Bernoulli random variable with parameter p = F(t); hence n \widehat F_n(t) is a binomial random variable with mean nF(t) and variance nF(t)(1 - F(t)). This implies that \widehat F_n(t) is an unbiased estimator for F(t). However, in some textbooks the definition is given as

: \widehat F_n(t) = \frac{1}{n+1} \sum_{i=1}^n \mathbf{1}_{X_i \le t}

(Madsen, H.O., Krenk, S., Lind, S.C. (2006) ''Methods of Structural Safety''. Dover Publications. pp. 148–149).
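
The definition above can be computed directly. The following is a minimal sketch in Python; the function name and the sample values are illustrative, not from any source:

```python
# Minimal sketch of the definition: F_n(t) is the fraction of
# observations less than or equal to t.
import numpy as np

def ecdf(sample, t):
    """Empirical distribution function evaluated at t."""
    sample = np.asarray(sample)
    return np.mean(sample <= t)   # (1/n) * sum of indicators 1{X_i <= t}

sample = np.array([3.1, 1.4, 2.7, 0.5, 2.7])
print(ecdf(sample, 2.7))          # 0.8: four of five observations are <= 2.7
```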


Mean

The mean of the empirical distribution is an unbiased estimator of the mean of the population distribution:

: E_n(X) = \frac{1}{n}\left(\sum_{i=1}^n x_i\right),

which is more commonly denoted \bar{x}.
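
A short simulation sketch of this unbiasedness claim, with invented parameters, averages the empirical mean over many replications and recovers the population mean:

```python
# The empirical mean is the sample mean; averaging it over many
# replications approximates its expectation, which matches the
# population mean (unbiasedness).
import numpy as np

rng = np.random.default_rng(0)
true_mean = 2.0
means = [np.mean(rng.normal(loc=true_mean, scale=1.0, size=50))
         for _ in range(10_000)]
print(np.mean(means))   # close to 2.0
```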


Variance

The variance of the empirical distribution, multiplied by \tfrac{n}{n-1}, is an unbiased estimator of the variance of the population distribution, for any distribution of X that has a finite variance:

: \begin{align} \operatorname{Var}(X) &= \operatorname{E}\left[(X - \operatorname{E}[X])^2\right] \\ &= \operatorname{E}\left[(X - \bar{x})^2\right] \\ &= \frac{1}{n}\left(\sum_{i=1}^n (x_i - \bar{x})^2\right) \end{align}
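
As a sketch of the correction factor above (sample values illustrative): the plug-in variance of the empirical distribution times n/(n-1) equals the usual unbiased sample variance, which NumPy exposes through the ddof argument:

```python
# ddof=0 gives the plug-in (empirical) variance; ddof=1 the unbiased one.
import numpy as np

x = np.array([0.5, 1.4, 2.7, 2.7, 3.1])
n = len(x)
plug_in = np.var(x, ddof=0)        # (1/n) * sum (x_i - x-bar)^2
print(plug_in * n / (n - 1))       # equals the unbiased estimator...
print(np.var(x, ddof=1))           # ...computed directly
```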


Mean squared error

The mean squared error for the empirical distribution is as follows:

: \begin{align} \operatorname{MSE} &= \frac{1}{n}\sum_{i=1}^n(Y_i-\hat{Y_i})^2 \\ &= \operatorname{Var}_{\hat\theta}(\hat\theta) + \operatorname{Bias}(\hat\theta,\theta)^2 \end{align}

where \hat\theta is an estimator and \theta an unknown parameter.
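
A simulation sketch of the bias–variance decomposition above, applied to the estimator \widehat F_n(t) for a fixed t (all numbers invented): since the estimator is unbiased, its MSE should match its variance F(t)(1-F(t))/n.

```python
# n * F-hat_n(t) is Binomial(n, p) with p = F(t); replicate the
# estimator many times and check MSE = Var + Bias^2.
import numpy as np

rng = np.random.default_rng(1)
p, n = 0.3, 40                                      # p = F(t) for some fixed t
estimates = rng.binomial(n, p, size=100_000) / n    # replications of F-hat_n(t)
mse = np.mean((estimates - p) ** 2)
bias = np.mean(estimates) - p
var = np.var(estimates)
print(mse, var + bias**2)      # the decomposition holds
print(p * (1 - p) / n)         # theoretical value, approx. 0.00525
```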


Quantiles

For any real number a, the notation \lceil a \rceil (read "ceiling of a") denotes the least integer greater than or equal to a, and the notation \lfloor a \rfloor (read "floor of a") denotes the greatest integer less than or equal to a. Write x_{(1)} \le \cdots \le x_{(n)} for the sample sorted into increasing order, so that x_{(k)} is the k-th order statistic. If nq is not an integer, then the q-th quantile is unique and is equal to x_{(\lceil nq \rceil)}. If nq is an integer, then the q-th quantile is not unique and is any real number x such that x_{(nq)} < x < x_{(nq+1)}.


Empirical median

If n is odd, then the empirical median is the number

: \tilde{x} = x_{((n+1)/2)}.

If n is even, then the empirical median is the number

: \tilde{x} = \frac{x_{(n/2)} + x_{(n/2+1)}}{2}.
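
A sketch implementing the quantile and median rules above, assuming the order-statistic convention x_{(1)} \le \cdots \le x_{(n)} and 0 < q < 1; names and sample values are illustrative:

```python
import math
import numpy as np

def empirical_quantile(sample, q):
    x = np.sort(sample)
    n = len(x)
    nq = n * q
    if nq != int(nq):                # nq not an integer: unique quantile
        return x[math.ceil(nq) - 1]  # x_(ceil(nq)), shifting 1-based to 0-based
    k = int(nq)                      # nq an integer: any value strictly between
    return (x[k - 1], x[k])          # x_(nq) and x_(nq+1) is a q-th quantile

def empirical_median(sample):
    x = np.sort(sample)
    n = len(x)
    if n % 2 == 1:
        return x[(n + 1) // 2 - 1]            # x_((n+1)/2)
    return (x[n // 2 - 1] + x[n // 2]) / 2    # average of the middle pair

print(empirical_quantile([4, 1, 3, 2], 0.25))  # nq = 1 -> the interval (1, 2)
print(empirical_median([4, 1, 3, 2, 5]))       # 3
```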


Asymptotic properties

Since the ratio (n+1)/n approaches 1 as n goes to infinity, the asymptotic properties of the two definitions that are given above are the same.

By the strong law of large numbers, the estimator \widehat F_n(t) converges to F(t) as n \to \infty almost surely, for every value of t:

: \widehat F_n(t)\ \xrightarrow{\text{a.s.}}\ F(t);

thus the estimator \widehat F_n(t) is consistent. This expression asserts the pointwise convergence of the empirical distribution function to the true cumulative distribution function. There is a stronger result, called the Glivenko–Cantelli theorem, which states that the convergence in fact happens uniformly over t:

: \|\widehat F_n-F\|_\infty \equiv \sup_{t\in\mathbb{R}} \big|\widehat F_n(t)-F(t)\big| \ \xrightarrow{\text{a.s.}}\ 0.

The sup-norm in this expression is called the Kolmogorov–Smirnov statistic for testing the goodness-of-fit between the empirical distribution \widehat F_n(t) and the assumed true cumulative distribution function F. Other norm functions may be reasonably used here instead of the sup-norm. For example, the L2-norm gives rise to the Cramér–von Mises statistic.

The asymptotic distribution can be further characterized in several different ways. First, the central limit theorem states that ''pointwise'', \widehat F_n(t) has an asymptotically normal distribution with the standard \sqrt{n} rate of convergence:

: \sqrt{n}\big(\widehat F_n(t) - F(t)\big)\ \ \xrightarrow{d}\ \ \mathcal{N}\Big( 0, F(t)\big(1-F(t)\big) \Big).

This result is extended by Donsker's theorem, which asserts that the ''empirical process'' \sqrt{n}(\widehat F_n - F), viewed as a function indexed by t \in \mathbb{R}, converges in distribution in the Skorokhod space D[-\infty, +\infty] to the mean-zero Gaussian process G_F = B \circ F, where B is the standard Brownian bridge. The covariance structure of this Gaussian process is

: \operatorname{E}[G_F(t_1) G_F(t_2)] = F(t_1\wedge t_2) - F(t_1)F(t_2).

The uniform rate of convergence in Donsker's theorem can be quantified by the result known as the Hungarian embedding:

: \limsup_{n\to\infty} \frac{\sqrt{n}}{\ln n} \big\|\sqrt{n}(\widehat F_n-F) - G_{F,n}\big\|_\infty < \infty, \quad \text{a.s.},

where G_{F,n} is a suitable sequence of Gaussian processes, each distributed as G_F, constructed on the same probability space as \widehat F_n. Alternatively, the rate of convergence of \sqrt{n}(\widehat F_n-F) can also be quantified in terms of the asymptotic behavior of the sup-norm of this expression. A number of results exist in this vein; for example, the Dvoretzky–Kiefer–Wolfowitz inequality provides a bound on the tail probabilities of \sqrt{n}\|\widehat F_n-F\|_\infty:

: \Pr\!\Big( \sqrt{n}\|\widehat F_n-F\|_\infty > z \Big) \leq 2e^{-2z^2}.

In fact, Kolmogorov has shown that if the cumulative distribution function F is continuous, then the expression \sqrt{n}\|\widehat F_n-F\|_\infty converges in distribution to \|B\|_\infty, which has the Kolmogorov distribution that does not depend on the form of F. Another result, which follows from the law of the iterated logarithm, is that

: \limsup_{n\to\infty} \frac{\sqrt{n}\,\|\widehat F_n-F\|_\infty}{\sqrt{2\ln\ln n}} \leq \frac12, \quad \text{a.s.}

and

: \liminf_{n\to\infty} \sqrt{2n\ln\ln n}\, \|\widehat F_n-F\|_\infty = \frac{\pi}{2}, \quad \text{a.s.}
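
A simulation sketch of the uniform convergence and the Kolmogorov–Smirnov statistic discussed above, with invented sample sizes: the sup-norm distance is attained at the jump points of \widehat F_n, so it can be computed exactly and compared against SciPy's KS statistic.

```python
# ||F-hat_n - F||_inf shrinks as n grows (Glivenko-Cantelli); the
# manual computation agrees with scipy.stats.kstest.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for n in (10, 100, 1000, 10000):
    x = np.sort(rng.standard_normal(n))
    cdf = stats.norm.cdf(x)
    # D_n = sup_t |F-hat_n(t) - F(t)|, checking both sides of each jump
    d_n = max(np.max(np.arange(1, n + 1) / n - cdf),
              np.max(cdf - np.arange(n) / n))
    print(n, d_n, stats.kstest(x, 'norm').statistic)
```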


Confidence intervals

As per the Dvoretzky–Kiefer–Wolfowitz inequality, the interval that contains the true CDF, F(x), with probability 1-\alpha is specified as

: F_n(x) - \varepsilon \le F(x) \le F_n(x) + \varepsilon \quad \text{where } \varepsilon = \sqrt{\frac{\ln\frac{2}{\alpha}}{2n}}.

As per the above bounds, we can plot the empirical CDF, the true CDF and the confidence intervals for different distributions using any of the statistical implementations listed below; a sketch using Statsmodels for plotting the empirical distribution follows.
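
The following is a minimal sketch, assuming statsmodels and matplotlib are installed; the ECDF class is a real statsmodels API, while the sample, seed and plotting choices are illustrative:

```python
# Empirical CDF with DKW confidence bands.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.distributions.empirical_distribution import ECDF

rng = np.random.default_rng(3)
sample = rng.standard_normal(200)
alpha = 0.05
n = len(sample)
eps = np.sqrt(np.log(2 / alpha) / (2 * n))   # DKW half-width

ecdf = ECDF(sample)                          # callable step function
grid = np.linspace(sample.min(), sample.max(), 500)
f_n = ecdf(grid)

plt.step(grid, f_n, label="empirical CDF")
plt.step(grid, np.clip(f_n - eps, 0, 1), linestyle="--", label="lower band")
plt.step(grid, np.clip(f_n + eps, 0, 1), linestyle="--", label="upper band")
plt.legend()
plt.show()
```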


Statistical implementation

A non-exhaustive list of software implementations of the empirical distribution function includes:
* In R software, we can compute an empirical cumulative distribution function, with several methods for plotting, printing and computing with such an "ecdf" object.
* In MATLAB, we can use the empirical cumulative distribution function (cdf) plot.
* In jmp from SAS, the CDF plot creates a plot of the empirical cumulative distribution function.
* In Minitab, we can create an empirical CDF.
* In Mathwave, we can fit a probability distribution to our data.
* In Dataplot, we can plot an empirical CDF plot.
* In SciPy, using scipy.stats, we can plot the distribution.
* In Statsmodels, we can use statsmodels.distributions.empirical_distribution.ECDF.
* In Matplotlib, we can use histograms to plot a cumulative distribution.
* In Seaborn, using the seaborn.ecdfplot function.
* In Plotly, using the plotly.express.ecdf function.
* In Excel, we can plot an empirical CDF plot.


See also

* Càdlàg functions
* Count data
* Distribution fitting
* Dvoretzky–Kiefer–Wolfowitz inequality
* Empirical probability
* Empirical process
* Estimating quantiles from a sample
* Frequency (statistics)
* Kaplan–Meier estimator for censored processes
* Survival function
* Q–Q plot


