Von Mises–Fisher Distribution
In directional statistics, the von Mises–Fisher distribution (named after Richard von Mises and Ronald Fisher) is a probability distribution on the (p-1)-sphere in \mathbb{R}^p. If p = 2 the distribution reduces to the von Mises distribution on the circle.

Definition

The probability density function of the von Mises–Fisher distribution for the random ''p''-dimensional unit vector \mathbf{x} is given by:

: f_p(\mathbf{x}; \boldsymbol{\mu}, \kappa) = C_p(\kappa) \exp\left( \kappa \boldsymbol{\mu}^{\mathsf{T}} \mathbf{x} \right),

where \kappa \ge 0, \left\Vert \boldsymbol{\mu} \right\Vert = 1 and the normalization constant C_p(\kappa) is equal to

: C_p(\kappa) = \frac{\kappa^{p/2-1}}{(2\pi)^{p/2} I_{p/2-1}(\kappa)},

where I_v denotes the modified Bessel function of the first kind at order v. If p = 3, the normalization constant reduces to

: C_3(\kappa) = \frac{\kappa}{4\pi \sinh \kappa} = \frac{\kappa}{2\pi \left( e^{\kappa} - e^{-\kappa} \right)}.

The parameters \boldsymbol{\mu} and \kappa are called the ''mean direction'' and ''concentration parameter'', respectively. The greater the value of \kappa, the higher the concentration of the distribution around the mean ...
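To make the definition concrete, here is a minimal sketch in Python (assuming NumPy and SciPy are available; the function name vmf_pdf is chosen for illustration) that evaluates the density for \kappa > 0 directly from the formulas above:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def vmf_pdf(x, mu, kappa):
    """von Mises-Fisher density at unit vector x, for mean direction mu
    (a unit vector) and concentration kappa > 0."""
    p = len(mu)
    # Normalization C_p(kappa) = kappa^(p/2-1) / ((2 pi)^(p/2) I_{p/2-1}(kappa))
    c = kappa**(p/2 - 1) / ((2*np.pi)**(p/2) * iv(p/2 - 1, kappa))
    return c * np.exp(kappa * np.dot(mu, x))

# Sanity check for p = 3: C_3(kappa) equals kappa / (4 pi sinh kappa).
mu = np.array([0.0, 0.0, 1.0])
print(vmf_pdf(mu, mu, 2.0))                        # density at the mode
print(2.0 / (4*np.pi*np.sinh(2.0)) * np.exp(2.0))  # same value via C_3
```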

Directional Statistics
Directional statistics (also circular statistics or spherical statistics) is the subdiscipline of statistics that deals with directions (unit vectors in Euclidean space, R''n''), axes (lines through the origin in R''n'') or rotations in R''n''. More generally, directional statistics deals with observations on compact Riemannian manifolds including the Stiefel manifold.

The fact that 0 degrees and 360 degrees are identical angles, so that for example 180 degrees is not a sensible mean of 2 degrees and 358 degrees, provides one illustration that special statistical methods are required for the analysis of some types of data (in this case, angular data). Other examples of data that may be regarded as directional include statistics involving temporal periods (e.g. time of day, week, month, year, etc.), compass directions, dihedral angles in molecules, orientations, rotations and so on.

Circular distributions

Any probability density function (pdf) p(x) on the line can be "wr ...
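The 2-degrees/358-degrees example can be handled by embedding angles as unit vectors, averaging in the plane, and reading off the angle of the resultant; a small Python sketch (the function name circular_mean_deg is illustrative):

```python
import numpy as np

def circular_mean_deg(angles_deg):
    """Mean direction of angles: average the corresponding unit vectors,
    then take the angle of the resultant."""
    a = np.radians(angles_deg)
    return np.degrees(np.arctan2(np.sin(a).mean(), np.cos(a).mean())) % 360

print(circular_mean_deg([2, 358]))  # 0.0, not the naive arithmetic mean 180
```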

Lebesgue Measure
In measure theory, a branch of mathematics, the Lebesgue measure, named after French mathematician Henri Lebesgue, is the standard way of assigning a measure to subsets of ''n''-dimensional Euclidean space. For ''n'' = 1, 2, or 3, it coincides with the standard measure of length, area, or volume. In general, it is also called ''n''-dimensional volume, ''n''-volume, or simply volume. It is used throughout real analysis, in particular to define Lebesgue integration. Sets that can be assigned a Lebesgue measure are called Lebesgue-measurable; the measure of the Lebesgue-measurable set ''A'' is here denoted by ''λ''(''A''). Henri Lebesgue described this measure in the year 1901, followed the next year by his description of the Lebesgue integral. Both were published as part of his dissertation in 1902.

Definition

For any interval I = [a, b], or I = (a, b), in the set \mathbb{R} of real numbers, let \ell(I) = b - a denote its length. For any subset E \subseteq \mathbb{R}, the Lebesgue ...
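The length function \ell extends additively over unions of intervals, which is the first step toward the full outer-measure construction. As an illustration only, not the construction itself, this Python sketch computes the measure of a finite union of intervals by merging overlaps:

```python
def measure_of_union(intervals):
    """Total length of a finite union of intervals (a, b):
    sort by left endpoint, merge overlapping intervals, sum the lengths."""
    merged = []
    for a, b in sorted(intervals):
        if merged and a <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], b)  # extend the current block
        else:
            merged.append([a, b])                  # start a new block
    return sum(b - a for a, b in merged)

print(measure_of_union([(0, 1), (0.5, 2), (3, 4)]))  # 3.0
```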

Sufficient Statistic
In statistics, a statistic is ''sufficient'' with respect to a statistical model and its associated unknown parameter if "no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter". In particular, a statistic is sufficient for a family of probability distributions if the sample from which it is calculated gives no more information than the statistic does as to which of those probability distributions is the sampling distribution. A related concept is that of linear sufficiency, which is weaker than ''sufficiency'' but can be applied in some cases where there is no sufficient statistic, although it is restricted to linear estimators. The Kolmogorov structure function deals with individual finite data; the related notion there is the algorithmic sufficient statistic. The concept is due to Sir Ronald Fisher in 1920. Stephen Stigler noted in 1973 that the concept of sufficiency had fallen out of favor in de ...
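A standard worked example of the definition (a textbook illustration, not from the excerpt above): for n independent Bernoulli(p) trials, the Fisher–Neyman factorization shows that the number of successes is sufficient for p:

```latex
% Joint pmf of n i.i.d. Bernoulli(p) observations x_1, ..., x_n in {0,1}:
% it depends on the data only through T = x_1 + ... + x_n,
% so T is a sufficient statistic for p.
f(x_1,\dots,x_n;\,p)
  = \prod_{i=1}^{n} p^{x_i}(1-p)^{1-x_i}
  = \underbrace{p^{T}(1-p)^{n-T}}_{g(T,\,p)} \cdot \underbrace{1}_{h(x)},
\qquad T = \sum_{i=1}^{n} x_i .
```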

Arithmetic Mean
In mathematics and statistics, the arithmetic mean or arithmetic average, or just the ''mean'' or the ''average'' (when the context is clear), is the sum of a collection of numbers divided by the count of numbers in the collection. The collection is often a set of results of an experiment or an observational study, or frequently a set of results from a survey. The term "arithmetic mean" is preferred in some contexts in mathematics and statistics because it helps distinguish it from other means, such as the geometric mean and the harmonic mean. In addition to mathematics and statistics, the arithmetic mean is used frequently in many diverse fields such as economics, anthropology and history, and it is used in almost every academic field to some extent. For example, per capita income is the arithmetic average income of a nation's population. While the arithmetic mean is often used to report central tendencies, it is not a robust statistic, meaning that it is great ...
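As a direct illustration of both points, the sum-divided-by-count definition and the lack of robustness, in Python:

```python
data = [1, 2, 3, 4, 5]
print(sum(data) / len(data))  # 3.0: the sum divided by the count

# Not robust: one extreme outlier shifts the mean far from the bulk of the data.
print(sum(data + [1000]) / (len(data) + 1))  # about 169.2
```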

Maximum Likelihood
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference. If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when all observed outcomes are assumed to have Normal distributions with the same variance. From the perspective of Bayesian infere ...
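A brief numeric sketch in Python (assuming NumPy and SciPy; the data are simulated and the names illustrative) that maximizes a normal log-likelihood and compares the result with the closed-form MLE:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=500)  # simulated observations

# Negative log-likelihood of a normal model; minimizing it yields the MLE.
def nll(theta):
    mu, log_sigma = theta
    return -norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)).sum()

res = minimize(nll, x0=[0.0, 0.0])
print(res.x[0], np.exp(res.x[1]))  # numeric MLE for (mu, sigma)
print(data.mean(), data.std())     # closed form: sample mean and (ddof=0) std
```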

Unit Vector
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in \hat{\mathbf{v}} (pronounced "v-hat"). The term ''direction vector'', commonly denoted as d, is used to describe a unit vector being used to represent spatial direction and relative direction. 2D spatial directions are numerically equivalent to points on the unit circle and spatial directions in 3D are equivalent to a point on the unit sphere.

The normalized vector û of a non-zero vector u is the unit vector in the direction of u, i.e.,

: \mathbf{\hat{u}} = \frac{\mathbf{u}}{\|\mathbf{u}\|},

where \|\mathbf{u}\| is the norm (or length) of u. The term ''normalized vector'' is sometimes used as a synonym for ''unit vector''. Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may be written as a linear combination of unit vectors.

Orthogonal coordinates

Cartesian coordinates

Unit vectors ...
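A one-line implementation of the normalization formula, sketched in Python with NumPy:

```python
import numpy as np

def normalize(u):
    """Return the unit vector u / ||u||; the zero vector has no direction."""
    n = np.linalg.norm(u)
    if n == 0:
        raise ValueError("cannot normalize the zero vector")
    return u / n

print(normalize(np.array([3.0, 4.0])))  # [0.6 0.8], a vector of length 1
```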

Independence (probability Theory)
Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect the odds. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. When dealing with collections of more than two events, two notions of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while mutual independence (or collective independence) of events means, informally speaking, that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables. Mutual independence implies pairwise independen ...
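The distinction can be seen in a classic three-event example (an illustration, not from the excerpt): two fair bits and their XOR are pairwise independent but not mutually independent. A short Python enumeration:

```python
from itertools import product

# Outcomes (a, b, a XOR b) for two fair bits: four equally likely triples.
outcomes = [(a, b, a ^ b) for a, b in product([0, 1], repeat=2)]

def prob(event):
    return sum(event(o) for o in outcomes) / len(outcomes)

pA   = prob(lambda o: o[0] == 1)                # 1/2 (same for B and C)
pAC  = prob(lambda o: o[0] == 1 and o[2] == 1)  # 1/4 = pA * pC: pairwise holds
pABC = prob(lambda o: o == (1, 1, 1))           # 0, not 1/8: mutual fails
print(pA, pAC, pABC)
```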

Fisher–Bingham Distribution
In directional statistics, the Kent distribution, also known as the 5-parameter Fisher–Bingham distribution (named after John T. Kent, Ronald Fisher, and Christopher Bingham), is a probability distribution on the unit sphere (2-sphere ''S''2 in 3-space R3). It is the analogue on ''S''2 of the bivariate normal distribution with an unconstrained covariance matrix. The Kent distribution was proposed by John T. Kent in 1982, and is used in geology as well as bioinformatics.

Definition

The probability density function f(\mathbf{x})\, of the Kent distribution is given by:

: f(\mathbf{x}) = \frac{1}{c(\kappa,\beta)} \exp\left\{ \kappa \boldsymbol{\gamma}_1^{\mathsf{T}} \mathbf{x} + \beta \left[ (\boldsymbol{\gamma}_2^{\mathsf{T}} \mathbf{x})^2 - (\boldsymbol{\gamma}_3^{\mathsf{T}} \mathbf{x})^2 \right] \right\}

where \mathbf{x}\, is a three-dimensional unit vector, (\cdot)^{\mathsf{T}} denotes the transpose of (\cdot), and the normalizing constant c(\kappa,\beta)\, is:

: c(\kappa,\beta) = 2\pi \sum_{j=0}^{\infty} \frac{\Gamma\left(j+\frac{1}{2}\right)}{\Gamma(j+1)} \beta^{2j} \left(\frac{2}{\kappa}\right)^{2j+\frac{1}{2}} I_{2j+\frac{1}{2}}(\kappa),

where I_v(\kappa) is the modified Bessel function and \Gamma(\cdot) is the gamma function. Note that c(0,0) = 4\pi and c(\kappa,0) = 4\pi \kappa^{-1} \sinh ...
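The series for c(\kappa, \beta) can be evaluated numerically; a sketch in Python (assuming SciPy, truncating the series at a fixed number of terms, names illustrative):

```python
import numpy as np
from scipy.special import iv, gammaln

def kent_norm_const(kappa, beta, terms=50):
    """Truncated series c(kappa, beta) = 2 pi sum_j Gamma(j+1/2)/Gamma(j+1)
    * beta^(2j) * (2/kappa)^(2j+1/2) * I_{2j+1/2}(kappa)."""
    j = np.arange(terms)
    coef = np.exp(gammaln(j + 0.5) - gammaln(j + 1.0))  # Gamma ratio, stably
    return 2*np.pi*np.sum(coef * beta**(2*j)
                          * (2.0/kappa)**(2*j + 0.5) * iv(2*j + 0.5, kappa))

# Sanity check against the closed form c(kappa, 0) = 4 pi sinh(kappa) / kappa:
print(kent_norm_const(2.0, 0.0), 4*np.pi*np.sinh(2.0)/2.0)
```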

Restriction (mathematics)
In mathematics, the restriction of a function f is a new function, denoted f\vert_A or f\upharpoonright_A, obtained by choosing a smaller domain A for the original function f. The function f is then said to extend f\vert_A.

Formal definition

Let f : E \to F be a function from a set E to a set F. If a set A is a subset of E, then the restriction of f to A is the function f\vert_A : A \to F given by f\vert_A(x) = f(x) for x \in A. Informally, the restriction of f to A is the same function as f, but it is only defined on A. If the function f is thought of as a relation (x, f(x)) on the Cartesian product E \times F, then the restriction of f to A can be represented by its graph G(f\vert_A) = \{ (x, f(x)) : x \in A \} = G(f) \cap (A \times F).

Extensions

A function F is said to be an ''extension'' of another function f if whenever x is in the domain of f then x is also in the domain of F and f(x) = F(x). That is, if \operatorname{dom} f \subseteq \operatorname{dom} F and F\big\vert_{\operatorname{dom} f} = f. A ''linear extension'' (respectively, ''continuous extension'', etc.) ...
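A standard worked example of why restriction matters (an illustration, not from the excerpt): squaring is not injective on all of \mathbb{R}, but its restriction to [0, \infty) is, and is therefore invertible onto its image:

```latex
% f(x) = x^2 is not injective on R (f(-1) = f(1)),
% but the restriction f|_{[0,inf)} is injective, with inverse sqrt on its image.
f : \mathbb{R} \to \mathbb{R}, \quad f(x) = x^2,
\qquad
\left( f\vert_{[0,\infty)} \right)^{-1}(y) = \sqrt{y}
\quad \text{for } y \ge 0 .
```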

Covariance
In probability theory and statistics, covariance is a measure of the joint variability of two random variables. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values (that is, the variables tend to show similar behavior), the covariance is positive. In the opposite case, when the greater values of one variable mainly correspond to the lesser values of the other (that is, the variables tend to show opposite behavior), the covariance is negative. The sign of the covariance therefore shows the tendency in the linear relationship between the variables. The magnitude of the covariance is not easy to interpret because it is not normalized and hence depends on the magnitudes of the variables. The normalized version of the covariance, the correlation coefficient, however, shows by its magnitude the strength of the linear relation. A distinction must be made between (1) the covariance of two ra ...
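A small Python sketch of the definition (using the 1/n convention; names illustrative), checked against NumPy's built-in:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 1.9, 3.2, 4.1])

# cov(X, Y) = E[(X - E[X]) * (Y - E[Y])], here with the 1/n convention.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
print(cov_xy)
print(np.cov(x, y, bias=True)[0, 1])  # same value from NumPy

# Normalizing by the standard deviations gives the correlation coefficient.
print(cov_xy / (x.std() * y.std()))   # lies in [-1, 1]
```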

Isotropic
Isotropy is uniformity in all orientations; the term is derived from the Greek ''isos'' ("equal") and ''tropos'' ("way"). Precise definitions depend on the subject area. Exceptions, or inequalities, are frequently indicated by the prefix ''an-'' or ''a-'', hence ''anisotropy''. ''Anisotropy'' is also used to describe situations where properties vary systematically, dependent on direction. Isotropic radiation has the same intensity regardless of the direction of measurement, and an isotropic field exerts the same action regardless of how the test particle is oriented.

Mathematics

Within mathematics, ''isotropy'' has a few different meanings:

; Isotropic manifolds: A manifold is isotropic if the geometry on the manifold is the same regardless of direction. A similar concept is homogeneity.
; Isotropic quadratic form: A quadratic form ''q'' is said to be isotropic if there is a non-zero vector ''v'' such that ''q''(''v'') = 0; such a ''v'' is an isotropic vector or null vector. In complex geometry, a line through the origin in the direction of an isotropic vecto ...
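For the quadratic-form meaning, a one-line worked example:

```latex
% q(x, y) = x^2 - y^2 is isotropic on R^2: the non-zero vector v = (1, 1)
% satisfies q(v) = 0, so v is an isotropic (null) vector.
q(x, y) = x^2 - y^2, \qquad q(1, 1) = 1 - 1 = 0 .
```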

Normal Distribution
In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is

: f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}

The parameter \mu is the mean or expectation of the distribution (and also its median and mode), while the parameter \sigma is its standard deviation. The variance of the distribution is \sigma^2. A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable—whose distribution converges to a normal d ...
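Evaluating the density exactly as written, a minimal Python sketch:

```python
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    """f(x) = exp(-((x - mu)/sigma)^2 / 2) / (sigma * sqrt(2 pi))."""
    z = (x - mu) / sigma
    return np.exp(-0.5 * z**2) / (sigma * np.sqrt(2*np.pi))

print(normal_pdf(0.0))  # 1/sqrt(2 pi), about 0.3989
```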