Order statistic

In statistics, the ''k''th order statistic of a statistical sample is equal to its ''k''th-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.

Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.

When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.


Notation and examples

For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are

:6, 9, 3, 8,

the order statistics would be denoted

:x_{(1)}=3,\ \ x_{(2)}=6,\ \ x_{(3)}=8,\ \ x_{(4)}=9,\,

where the subscript (''i'') enclosed in parentheses indicates the ''i''th order statistic of the sample.

The first order statistic (or smallest order statistic) is always the minimum of the sample, that is,

:X_{(1)}=\min\{X_1,\ldots,X_n\}

where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.

Similarly, for a sample of size ''n'', the ''n''th order statistic (or largest order statistic) is the maximum, that is,

:X_{(n)}=\max\{X_1,\ldots,X_n\}.

The sample range is the difference between the maximum and minimum. It is a function of the order statistics:

:\operatorname{Range}\{X_1,\ldots,X_n\} = X_{(n)}-X_{(1)}.

A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range.

The sample median may or may not be an order statistic, since there is a single middle value only when the number ''n'' of observations is odd. More precisely, if ''n'' = 2''m''+1 for some integer ''m'', then the sample median is X_{(m+1)} and so is an order statistic. On the other hand, when ''n'' is even, ''n'' = 2''m'' and there are two middle values, X_{(m)} and X_{(m+1)}, and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.
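As a concrete illustration, the following minimal Python sketch (standard library only; the variable names are ours, chosen for this example) reproduces the order statistics, range and median of the four-value sample above:

    # Order statistics of the sample 6, 9, 3, 8 from the example above.
    sample = [6, 9, 3, 8]
    order_stats = sorted(sample)          # x_(1), ..., x_(n)
    print(order_stats)                    # [3, 6, 8, 9]
    x_min, x_max = order_stats[0], order_stats[-1]
    sample_range = x_max - x_min          # x_(n) - x_(1) = 6
    # For even n the sample median averages the two middle order statistics,
    # so it is generally not itself an order statistic.
    n = len(order_stats)
    median = (order_stats[n // 2 - 1] + order_stats[n // 2]) / 2   # (6 + 8)/2 = 7
    print(x_min, x_max, sample_range, median)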


Probabilistic analysis

Given any random variables ''X''1, ''X''2, ..., ''X''''n'', the order statistics ''X''(1), ''X''(2), ..., ''X''(''n'') are also random variables, defined by sorting the values (realizations) of ''X''1, ..., ''X''''n'' in increasing order.

When the random variables ''X''1, ''X''2, ..., ''X''''n'' form a sample they are independent and identically distributed. This is the case treated below. In general, the random variables ''X''1, ..., ''X''''n'' can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem.

From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, that they are absolutely continuous. The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.


Cumulative distribution function of order statistics

For a random sample as above, with cumulative distribution F_X(x), the order statistics for that sample have cumulative distributions as follows (where ''r'' specifies which order statistic):

:F_{X_{(r)}}(x) = \sum_{j=r}^{n} \binom{n}{j} [F_X(x)]^{j} [1-F_X(x)]^{n-j}

The corresponding probability density function may be derived from this result, and is found to be

:f_{X_{(r)}}(x) = \frac{n!}{(r-1)!(n-r)!} f_X(x) [F_X(x)]^{r-1} [1-F_X(x)]^{n-r}.

Moreover, there are two special cases, which have CDFs that are easy to compute:

:F_{X_{(n)}}(x) = \operatorname{Prob}(\max\{X_1,\ldots,X_n\} \leq x) = [F_X(x)]^{n}

:F_{X_{(1)}}(x) = \operatorname{Prob}(\min\{X_1,\ldots,X_n\} \leq x) = 1 - [1-F_X(x)]^{n}

These can be derived by careful consideration of probabilities.
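As a quick sanity check of the general CDF formula, the sketch below (assuming NumPy is available; the sample size, parent distribution and evaluation point are arbitrary choices for illustration) compares the empirical CDF of an order statistic of simulated exponential samples with the binomial sum above:

    # Monte Carlo check of the order-statistic CDF formula.
    import numpy as np
    from math import comb

    rng = np.random.default_rng(0)
    n, r, x = 5, 3, 0.25                      # sample size, which order statistic, evaluation point
    samples = rng.exponential(scale=1.0, size=(200_000, n))
    x_r = np.sort(samples, axis=1)[:, r - 1]  # r-th order statistic of each sample
    empirical = np.mean(x_r <= x)

    F = 1 - np.exp(-x)                        # CDF of Exp(1) at x
    theoretical = sum(comb(n, j) * F**j * (1 - F)**(n - j) for j in range(r, n + 1))
    print(empirical, theoretical)             # the two values should be close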


Probability distributions of order statistics


Order statistics sampled from a uniform distribution

In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.

We assume throughout this section that X_1, X_2, \ldots, X_n is a random sample drawn from a continuous distribution with cdf F_X. Denoting U_i=F_X(X_i) we obtain the corresponding random sample U_1,\ldots,U_n from the standard uniform distribution. Note that the order statistics also satisfy U_{(i)}=F_X(X_{(i)}).

The probability density function of the order statistic U_{(k)} is equal to

:f_{U_{(k)}}(u)=\frac{n!}{(k-1)!(n-k)!}\,u^{k-1}(1-u)^{n-k}

that is, the ''k''th order statistic of the uniform distribution is a beta-distributed random variable.

:U_{(k)} \sim \operatorname{Beta}(k,\,n+1-k).

The proof of these statements is as follows. For U_{(k)} to be between ''u'' and ''u'' + ''du'', it is necessary that exactly ''k'' − 1 elements of the sample are smaller than ''u'', and that at least one is between ''u'' and ''u'' + d''u''. The probability that more than one is in this latter interval is already O(du^2), so we have to calculate the probability that exactly ''k'' − 1, 1 and ''n'' − ''k'' observations fall in the intervals (0,u), (u,u+du) and (u+du,1) respectively. This equals (refer to multinomial distribution for details)

:\frac{n!}{(k-1)!\,1!\,(n-k)!}\,u^{k-1}\cdot du\cdot(1-u-du)^{n-k}

and the result follows. The mean of this distribution is ''k'' / (''n'' + 1).
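The beta marginal can be checked by simulation; the following sketch assumes NumPy and SciPy are installed, and the values of ''n'' and ''k'' are arbitrary:

    # The k-th uniform order statistic compared with Beta(k, n+1-k).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, k = 10, 3
    u = np.sort(rng.uniform(size=(100_000, n)), axis=1)[:, k - 1]   # U_(k)
    print(u.mean(), k / (n + 1))                                    # sample mean vs k/(n+1)
    # Kolmogorov-Smirnov test against the Beta(k, n+1-k) distribution
    print(stats.kstest(u, stats.beta(k, n + 1 - k).cdf))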


The joint distribution of the order statistics of the uniform distribution

Similarly, for ''i'' < ''j'', the joint probability density function of the two order statistics ''U''(''i'') < ''U''(''j'') can be shown to be

:f_{U_{(i)},U_{(j)}}(u,v) = \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!}\,u^{i-1}(v-u)^{j-i-1}(1-v)^{n-j}

which is (up to terms of higher order than O(du\,dv)) the probability that ''i'' − 1, 1, ''j'' − 1 − ''i'', 1 and ''n'' − ''j'' sample elements fall in the intervals (0,u), (u,u+du), (u+du,v), (v,v+dv), (v+dv,1) respectively.

One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the ''n'' order statistics turns out to be ''constant'':

:f_{U_{(1)},U_{(2)},\ldots,U_{(n)}}(u_1,u_2,\ldots,u_n) = n!.

One way to understand this is that the unordered sample does have constant density equal to 1, and that there are ''n''! different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that 1/''n''! is the volume of the region 0 < u_1 < u_2 < \cdots < u_n < 1. It is also related with another particularity of order statistics of uniform random variables: it follows from the BRS-inequality that the maximum expected number of uniform U(0,1] random variables one can choose from a sample of size ''n'' with a sum not exceeding 0 < s < n/2 is bounded above by \sqrt{2sn}, which is thus invariant on the set of all pairs ''s'', ''n'' with constant product ''sn''.

Using the above formulas, one can derive the distribution of the range of the order statistics, that is the distribution of U_{(n)}-U_{(1)}, i.e. maximum minus the minimum. More generally, for n\geq k>j\geq 1, U_{(k)}-U_{(j)} also has a beta distribution:

:U_{(k)}-U_{(j)}\sim \operatorname{Beta}(k-j,\, n-(k-j)+1)

From these formulas we can derive the covariance between two order statistics:

:\operatorname{Cov}(U_{(j)},U_{(k)})=\frac{j(n-k+1)}{(n+1)^2(n+2)}

The formula follows from noting that

:\operatorname{Var}(U_{(k)}-U_{(j)})=\operatorname{Var}(U_{(k)}) + \operatorname{Var}(U_{(j)})-2\cdot \operatorname{Cov}(U_{(j)},U_{(k)}) =\frac{k(n-k+1)}{(n+1)^2(n+2)}+\frac{j(n-j+1)}{(n+1)^2(n+2)}-2\cdot \operatorname{Cov}(U_{(j)},U_{(k)})

and comparing that with

:\operatorname{Var}(U)=\frac{(k-j)(n-(k-j)+1)}{(n+1)^2(n+2)}

where U\sim \operatorname{Beta}(k-j,\,n-(k-j)+1), which is the actual distribution of the difference.
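A short simulation (again assuming NumPy; the values of ''n'', ''j'' and ''k'' are arbitrary) can be used to check the covariance formula:

    # Checking Cov(U_(j), U_(k)) = j(n-k+1) / ((n+1)^2 (n+2)) by simulation.
    import numpy as np

    rng = np.random.default_rng(2)
    n, j, k = 8, 2, 6
    u = np.sort(rng.uniform(size=(200_000, n)), axis=1)
    empirical = np.cov(u[:, j - 1], u[:, k - 1])[0, 1]
    theoretical = j * (n - k + 1) / ((n + 1) ** 2 * (n + 2))
    print(empirical, theoretical)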


Order statistics sampled from an exponential distribution

For X_1, X_2, \ldots, X_n a random sample of size ''n'' from an exponential distribution with parameter ''λ'', the order statistics ''X''(''i'') for ''i'' = 1, 2, 3, ..., ''n'' each have distribution

:X_{(i)} \stackrel{d}{=} \frac{1}{\lambda}\left( \sum_{j=1}^{i} \frac{Z_j}{n-j+1} \right)

where the ''Z''''j'' are iid standard exponential random variables (i.e. with rate parameter 1). This result was first published by Alfréd Rényi.
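Rényi's representation is easy to verify numerically; the sketch below (NumPy assumed, parameters chosen arbitrarily) compares the first two moments of the two sides:

    # Renyi's representation: X_(i) has the same distribution as
    # (1/lambda) * sum_{j=1..i} Z_j / (n - j + 1) with Z_j ~ Exp(1).
    import numpy as np

    rng = np.random.default_rng(3)
    lam, n, i, reps = 2.0, 6, 3, 200_000
    # Left-hand side: the i-th order statistic of n Exp(lambda) variables.
    lhs = np.sort(rng.exponential(scale=1 / lam, size=(reps, n)), axis=1)[:, i - 1]
    # Right-hand side: weighted partial sums of standard exponentials.
    z = rng.exponential(size=(reps, i))
    weights = 1.0 / (n - np.arange(1, i + 1) + 1)
    rhs = (z * weights).sum(axis=1) / lam
    print(lhs.mean(), rhs.mean())     # means agree
    print(lhs.var(), rhs.var())       # variances agree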


Order statistics sampled from an Erlang distribution

The Laplace transform of order statistics sampled from an Erlang distribution may be obtained via a path-counting method.


The joint distribution of the order statistics of an absolutely continuous distribution

If ''F''''X'' is absolutely continuous, it has a density such that dF_X(x)=f_X(x)\,dx, and we can use the substitutions

:u=F_X(x)

and

:du=f_X(x)\,dx

to derive the following probability density functions for the order statistics of a sample of size ''n'' drawn from the distribution of ''X'':

:f_{X_{(k)}}(x) =\frac{n!}{(k-1)!(n-k)!}[F_X(x)]^{k-1}[1-F_X(x)]^{n-k} f_X(x)

:f_{X_{(j)},X_{(k)}}(x,y) = \frac{n!}{(j-1)!(k-j-1)!(n-k)!}[F_X(x)]^{j-1}[F_X(y)-F_X(x)]^{k-1-j}[1-F_X(y)]^{n-k}f_X(x)f_X(y) where x\le y

:f_{X_{(1)},\ldots,X_{(n)}}(x_1,\ldots,x_n)=n!\,f_X(x_1)\cdots f_X(x_n) where x_1\le x_2\le \dots \le x_n.


Application: confidence intervals for quantiles

An interesting question is how well the order statistics perform as estimators of the quantiles of the underlying distribution.


A small-sample-size example

The simplest case to consider is how well the sample median estimates the population median.

As an example, consider a random sample of size 6. In that case, the sample median is usually defined as the midpoint of the interval delimited by the 3rd and 4th order statistics. However, we know from the preceding discussion that the probability that this interval actually contains the population median is

:\binom{6}{3}(1/2)^6 = \frac{20}{64} \approx 31\%.

Although the sample median is probably among the best distribution-independent point estimates of the population median, what this example illustrates is that it is not a particularly good one in absolute terms. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median with probability

:\left[\binom{6}{2}+\binom{6}{3}+\binom{6}{4}\right](1/2)^6 = \frac{50}{64} \approx 78\%.

With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median.
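These coverage probabilities follow from the binomial distribution of the number of observations falling below the median, and can be reproduced in a few lines of Python (standard library only; the helper name is ours):

    # Coverage probability of the interval [X_(j), X_(k)] for the population median:
    # P = sum_{i=j}^{k-1} C(n, i) (1/2)^n, reproducing the 31%, 78% and 97% figures above.
    from math import comb

    def median_coverage(n, j, k):
        return sum(comb(n, i) for i in range(j, k)) / 2 ** n

    print(median_coverage(6, 3, 4))   # [X_(3), X_(4)]: 0.3125  (~31%)
    print(median_coverage(6, 2, 5))   # [X_(2), X_(5)]: 0.78125 (~78%)
    print(median_coverage(6, 1, 6))   # [X_(1), X_(6)]: 0.96875 (~97%)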


Large sample sizes

For the uniform distribution, as ''n'' tends to infinity, the ''p''th sample quantile is asymptotically normally distributed, since it is approximated by

: U_{(\lceil np \rceil)} \sim AN\left(p,\frac{p(1-p)}{n}\right).

For a general distribution ''F'' with a continuous non-zero density at ''F''<sup>−1</sup>(''p''), a similar asymptotic normality applies:

: X_{(\lceil np \rceil)} \sim AN\left(F^{-1}(p),\frac{p(1-p)}{n[f(F^{-1}(p))]^2}\right)

where ''f'' is the density function, and ''F''<sup>−1</sup> is the quantile function associated with ''F''. One of the first people to mention and prove this result was Frederick Mosteller in his seminal paper in 1946. Further research led in the 1960s to the Bahadur representation which provides information about the error bounds.

An interesting observation can be made in the case where the distribution is symmetric, and the population median equals the population mean. In this case, the sample mean, by the central limit theorem, is also asymptotically normally distributed, but with variance σ<sup>2</sup>''/n'' instead. This asymptotic analysis suggests that the mean outperforms the median in cases of low kurtosis, and vice versa. For example, the median achieves better confidence intervals for the Laplace distribution, while the mean performs better for ''X'' that are normally distributed.
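The comparison between the sample median and the sample mean can be illustrated numerically; the sketch below (NumPy and SciPy assumed, with arbitrary choices of ''n'' and the number of replications) contrasts the normal and Laplace cases:

    # Variance of the sample median, approx. p(1-p)/(n f(F^{-1}(p))^2) with p = 1/2,
    # versus sigma^2/n for the sample mean, checked by simulation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n, reps = 201, 20_000
    for name, dist in [("normal", stats.norm), ("Laplace", stats.laplace)]:
        x = dist.rvs(size=(reps, n), random_state=rng)
        med_var = np.median(x, axis=1).var()          # simulated variance of the median
        mean_var = x.mean(axis=1).var()               # simulated variance of the mean
        asym_med = 0.25 / (n * dist.pdf(dist.ppf(0.5)) ** 2)
        print(name, med_var, asym_med, mean_var, dist.var() / n)

For the normal distribution the mean has the smaller variance, while for the Laplace distribution the median does, in line with the discussion above.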


Proof

It can be shown that

: B(k,n+1-k)\ \stackrel{d}{=}\ \frac{X}{X+Y},

where

: X = \sum_{i=1}^{k} Z_i, \quad Y = \sum_{i=k+1}^{n+1} Z_i,

with ''Z''<sub>''i''</sub> being independent identically distributed exponential random variables with rate 1. Since ''X''/''n'' and ''Y''/''n'' are asymptotically normally distributed by the CLT, our results follow by application of the delta method.


Application: Non-parametric density estimation

Moments of the distribution for the first order statistic can be used to develop a non-parametric density estimator. Suppose we want to estimate the density f_X at the point x^*. Consider the random variables Y_i = |X_i - x^*|, which are i.i.d. with distribution function g_Y(y) = f_X(y + x^*) + f_X(x^* - y). In particular, f_X(x^*) = \frac{g_Y(0)}{2}.

The expected value of the first order statistic Y_{(1)} given a sample of ''N'' total observations yields

: E(Y_{(1)}) = \frac{1}{(N+1)g_Y(0)} + \frac{1}{N+1} \int_0^1 Q''(z) \delta_N(z) \, dz

where Q is the quantile function associated with the distribution g_Y, and \delta_N(z) = (N+1)(1-z)^N. This equation, in combination with a jackknifing technique, becomes the basis for the following density estimation algorithm.

Input: A sample of ''N'' observations. \{x_\ell\}_{\ell=1}^{M} points of density evaluation. Tuning parameter a \in (0,1) (usually 1/3).
Output: \{\hat{f}_\ell\}_{\ell=1}^{M} estimated density at the points of evaluation.
1: Set m_N = \operatorname{round}(N^{1-a})
2: Set s_N = \frac{N}{m_N}
3: Create an s_N \times m_N matrix M which holds m_N subsets with s_N observations each.
4: Create a vector \hat{f} to hold the density evaluations.
5: for \ell = 1 \to M do
6:     for k = 1 \to m_N do
7:         Find the nearest distance d_{\ell k} to the current point x_\ell within the kth subset
8:     end for
9:     Compute the subset average of distances to x_\ell: d_\ell = \sum_{k=1}^{m_N} \frac{d_{\ell k}}{m_N}
10:    Compute the density estimate at x_\ell: \hat{f}_\ell = \frac{1}{2(s_N+1)d_\ell}
11: end for
12: return \hat{f}

In contrast to the bandwidth/length based tuning parameters for histogram and kernel based approaches, the tuning parameter for the order statistic based density estimator is the size of the sample subsets. Such an estimator is more robust than histogram and kernel based approaches; for example, densities like the Cauchy distribution (which lack finite moments) can be inferred without the need for specialized modifications such as IQR based bandwidths. This is because the first moment of the order statistic always exists if the expected value of the underlying distribution does, but the converse is not necessarily true.
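A compact, runnable version of the estimator is sketched below (NumPy assumed). The subset split m_N = round(N^{1−a}), s_N = N/m_N and the constant in the final step follow the algorithm as reconstructed above and should be read as illustrative choices rather than a definitive implementation:

    # Sketch of the subset-based first-order-statistic density estimator.
    import numpy as np

    def order_stat_density(data, eval_points, a=1/3, rng=None):
        rng = np.random.default_rng(rng)
        data = np.asarray(data, dtype=float)
        N = data.size
        m = max(1, int(round(N ** (1 - a))))     # number of subsets
        s = N // m                               # observations per subset
        subsets = rng.permutation(data)[: m * s].reshape(m, s)
        estimates = np.empty(len(eval_points))
        for idx, x_star in enumerate(eval_points):
            # nearest distance to x_star within each subset, then averaged
            d = np.abs(subsets - x_star).min(axis=1).mean()
            estimates[idx] = 1.0 / (2.0 * (s + 1) * d)
        return estimates

    rng = np.random.default_rng(6)
    sample = rng.standard_cauchy(10_000)         # heavy tails, no finite moments
    pts = np.array([0.0, 1.0, 2.0])
    print(order_stat_density(sample, pts, rng=7))
    print(1 / (np.pi * (1 + pts ** 2)))          # true Cauchy density for comparison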


Dealing with discrete variables

Suppose X_1,X_2,\ldots,X_n are i.i.d. random variables from a discrete distribution with cumulative distribution function F(x) and probability mass function f(x). To find the probabilities of the k^\text{th} order statistics, three values are first needed, namely

:p_1=P(X<x)=F(x)-f(x),\quad p_2=P(X=x)=f(x),\quad\text{and}\quad p_3=P(X>x)=1-F(x).

The cumulative distribution function of the k^\text{th} order statistic can be computed by noting that

: \begin{align} P(X_{(k)}\leq x)& =P(\text{there are at least }k\text{ observations less than or equal to }x) ,\\ & =P(\text{there are at most }n-k\text{ observations greater than }x) ,\\ & =\sum_{j=0}^{n-k}\binom{n}{j}p_3^{\,j}(p_1+p_2)^{n-j} . \end{align}

Similarly, P(X_{(k)}< x) is given by

: \begin{align} P(X_{(k)}< x)& =P(\text{there are at least }k\text{ observations less than }x) ,\\ & =P(\text{there are at most }n-k\text{ observations greater than or equal to }x) ,\\ & =\sum_{j=0}^{n-k}\binom{n}{j}(p_2+p_3)^{j}(p_1)^{n-j} . \end{align}

Note that the probability mass function of X_{(k)} is just the difference of these values, that is to say

: \begin{align} P(X_{(k)}=x)&=P(X_{(k)}\leq x)-P(X_{(k)}< x) ,\\ &=\sum_{j=0}^{n-k}\binom{n}{j}\left(p_3^{\,j}(p_1+p_2)^{n-j}-(p_2+p_3)^{j}(p_1)^{n-j}\right) ,\\ &=\sum_{j=0}^{n-k}\binom{n}{j}\left((1-F(x))^{j}(F(x))^{n-j}-(1-F(x)+f(x))^{j}(F(x)-f(x))^{n-j}\right). \end{align}
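The probability mass function above can be evaluated directly; the sketch below (standard library only; the helper name is ours) computes the distribution of the maximum of three rolls of a fair die:

    # PMF of the k-th order statistic for a discrete parent distribution,
    # using the p1, p2, p3 decomposition above.
    from math import comb

    def order_stat_pmf(x, n, k, F, f):
        """P(X_(k) = x) for i.i.d. discrete X with CDF F and PMF f."""
        p1, p2, p3 = F(x) - f(x), f(x), 1 - F(x)
        cdf_at    = sum(comb(n, j) * p3 ** j * (p1 + p2) ** (n - j) for j in range(n - k + 1))
        cdf_below = sum(comb(n, j) * (p2 + p3) ** j * p1 ** (n - j) for j in range(n - k + 1))
        return cdf_at - cdf_below

    # Example: maximum of n = 3 rolls of a fair six-sided die.
    F = lambda x: min(x, 6) / 6
    f = lambda x: 1 / 6
    pmf = [order_stat_pmf(x, n=3, k=3, F=F, f=f) for x in range(1, 7)]
    print(pmf, sum(pmf))      # P(max = x) = (x^3 - (x-1)^3)/216; probabilities sum to 1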


Computing order statistics

The problem of computing the ''k''th smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm. Although this problem is difficult for very large lists, sophisticated selection algorithms have been created that can solve this problem in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log ''n''). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(''n'' log ''n'').
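For illustration, a simple randomized quickselect (pure Python, a sketch rather than an optimized implementation) finds the ''k''th smallest element in expected linear time:

    # Finding the k-th smallest element without fully sorting the list.
    import random

    def kth_smallest(values, k):
        """Return the k-th order statistic (1-indexed) of an unordered list."""
        assert 1 <= k <= len(values)
        pivot = random.choice(values)
        below = [v for v in values if v < pivot]
        equal = [v for v in values if v == pivot]
        above = [v for v in values if v > pivot]
        if k <= len(below):
            return kth_smallest(below, k)
        if k <= len(below) + len(equal):
            return pivot
        return kth_smallest(above, k - len(below) - len(equal))

    print(kth_smallest([6, 9, 3, 8], 2))   # 6, the second order statistic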


See also

* Rankit
* Box plot
* BRS-inequality
* Concomitant (statistics)
* Fisher–Tippett distribution
* Bapat–Beg theorem for the order statistics of independent but not necessarily identically distributed random variables
* Bernstein polynomial
* L-estimator – linear combinations of order statistics
* Rank-size distribution
* Selection algorithm


Examples of order statistics

* Sample maximum and minimum
* Quantile
* Percentile
* Decile
* Quartile
* Median

