In probability theory and directional statistics, a circular uniform distribution is a probability distribution on the unit circle whose density is uniform for all angles.


Description


Definition

The probability density function (pdf) of the circular uniform distribution, e.g. with \theta\in[0,2\pi), is:

: f_{UC}(\theta)=\frac{1}{2\pi}.


Moments with respect to a parametrization

We consider the circular variable z=e^{i\theta} with z=1 at base angle \theta=0. In these terms, the circular moments of the circular uniform distribution are all zero, except for m_0:

: \langle z^n\rangle=\delta_n

where \delta_n is the Kronecker delta symbol.
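A minimal numerical check of this identity, assuming Python with numpy (the grid size is an arbitrary choice):

    import numpy as np

    # Average e^{i n theta} over an evenly spaced grid on [0, 2*pi);
    # this approximates <z^n> under the uniform density.
    theta = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
    for n in range(4):
        moment = np.mean(np.exp(1j * n * theta))
        print(n, np.round(moment, 12))  # ~1 for n = 0, ~0 for n = 1, 2, 3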


Descriptive statistics

Here the mean angle is undefined, and the length of the mean resultant is zero:

: R=|\langle z^n\rangle|=0.


Distribution of the mean

The sample mean of a set of ''N'' measurements z_n=e^{i\theta_n} drawn from a circular uniform distribution is defined as:

: \overline{z} = \frac{1}{N}\sum_{n=1}^N z_n = \overline{C}+i\overline{S} = \overline{R}\,e^{i\overline{\theta}}

where the average cosine and sine are:

: \overline{C}=\frac{1}{N}\sum_{n=1}^N \cos(\theta_n)\qquad\qquad\overline{S}=\frac{1}{N}\sum_{n=1}^N \sin(\theta_n)

and the average resultant length is:

: \overline{R}^2=|\overline{z}|^2=\overline{C}^2+\overline{S}^2

and the mean angle is:

: \overline{\theta}=\mathrm{Arg}(\overline{z}).

The sample mean for the circular uniform distribution will be concentrated about zero, becoming more concentrated as ''N'' increases. The distribution of the sample mean for the uniform distribution is given by:

: \frac{1}{(2\pi)^N}\int_\Gamma \prod_{n=1}^N d\theta_n = P(\overline{R})P(\overline{\theta})\,d\overline{R}\,d\overline{\theta}

where \Gamma consists of intervals of 2\pi in the variables, subject to the constraint that \overline{R} and \overline{\theta} are constant, or, alternatively, that \overline{C} and \overline{S} are constant. The distribution of the angle P(\overline{\theta}) is uniform:

: P(\overline{\theta})=\frac{1}{2\pi}

and the distribution of \overline{R} is given by:

: P_N(\overline{R})=N^2\overline{R}\int_0^\infty J_0(N\overline{R}\,t)J_0(t)^N\,t\,dt

where J_0 is the Bessel function of order zero.
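This integral can be probed numerically. The sketch below (an illustration under stated assumptions, not the article's method) truncates the slowly decaying, oscillatory integrand at an ad-hoc cutoff and, for N=2, compares the result with the closed form quoted further down; because of the discarded tail, agreement should only be expected to within a few percent:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import j0

    def p_mean_resultant(r_bar, n, cutoff=400.0):
        # Truncated evaluation of P_N(R) = N^2 R * int_0^inf J0(N R t) J0(t)^N t dt.
        # For small N the integrand decays only like t^(-1/2) while oscillating,
        # so the cutoff and subdivision limit are rough, ad-hoc choices.
        integrand = lambda t: j0(n * r_bar * t) * j0(t) ** n * t
        value, _ = quad(integrand, 0.0, cutoff, limit=5000)
        return n ** 2 * r_bar * value

    r_bar = 0.5
    print(p_mean_resultant(r_bar, n=2))               # rough numerical estimate
    print(2.0 / (np.pi * np.sqrt(1.0 - r_bar**2)))    # exact P_2(0.5) ~ 0.735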
There is no known general analytic solution for the integral defining P_N(\overline{R}), and it is difficult to evaluate because of the large number of oscillations in the integrand. A 10,000 point Monte Carlo simulation of the distribution of the mean for N=3 is shown in the figure.

For certain special cases, the above integral can be evaluated:

: P_2(\overline{R})=\frac{2}{\pi\sqrt{1-\overline{R}^2}}.

For large ''N'', the distribution of the mean can be determined from the central limit theorem for directional statistics. Since the angles are uniformly distributed, the individual sines and cosines of the angles will be distributed as:

: P(u)\,du=\frac{1}{\pi}\,\frac{du}{\sqrt{1-u^2}}

where u=\cos\theta_n or u=\sin\theta_n. It follows that they have zero mean and a variance of 1/2. By the central limit theorem, in the limit of large ''N'', \overline{C} and \overline{S}, each being the average of a large number of i.i.d. random variables, will be normally distributed with mean zero and variance 1/(2N). The mean resultant length \overline{R}, being the square root of the sum of squares of two independent normally distributed variables, will then be chi-distributed with two degrees of freedom (i.e. Rayleigh-distributed) with scale parameter \sigma^2=1/(2N):

: \lim_{N\rightarrow\infty}P_N(\overline{R})=2N\overline{R}\,e^{-N\overline{R}^2}.
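A short Monte Carlo sketch of this limit, assuming Python with numpy (sample counts and bin counts are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    N, trials = 100, 50_000
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(trials, N))  # uniform angles
    c_bar = np.cos(theta).mean(axis=1)                       # average cosine
    s_bar = np.sin(theta).mean(axis=1)                       # average sine
    r_bar = np.hypot(c_bar, s_bar)                           # mean resultant length

    # Compare a density histogram of the mean resultant length with the
    # limiting Rayleigh form 2 N R exp(-N R^2).
    hist, edges = np.histogram(r_bar, bins=40, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    rayleigh = 2.0 * N * mids * np.exp(-N * mids**2)
    print(np.max(np.abs(hist - rayleigh)))  # small compared with the peak density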


Entropy

The differential information entropy of the uniform distribution is simply

: H_U=-\int_\Gamma \frac{1}{2\pi}\,\ln\left(\frac{1}{2\pi}\right)\,d\theta = \ln(2\pi)

where \Gamma is any interval of length 2\pi. This is the maximum entropy any circular distribution may have.
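A quick numerical confirmation of this value (a sketch, assuming Python with scipy):

    import numpy as np
    from scipy.integrate import quad

    f = 1.0 / (2.0 * np.pi)  # constant density on the circle
    H, _ = quad(lambda theta: -f * np.log(f), 0.0, 2.0 * np.pi)
    print(H, np.log(2.0 * np.pi))  # both ~ 1.8379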


See also

* Wrapped distribution

