In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be ''k''-variate normally distributed if every linear combination of its ''k'' components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables, each of which clusters around a mean value.
Definitions
Notation and parametrization
The multivariate normal distribution of a ''k''-dimensional random vector \mathbf{X} = (X_1, \ldots, X_k)^{\mathrm T} can be written in the following notation:
: \mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma),
or to make it explicitly known that \mathbf{X} is ''k''-dimensional,
: \mathbf{X} \sim \mathcal{N}_k(\boldsymbol\mu, \boldsymbol\Sigma),
with ''k''-dimensional mean vector
: \boldsymbol\mu = \operatorname{E}[\mathbf{X}] = (\operatorname{E}[X_1], \operatorname{E}[X_2], \ldots, \operatorname{E}[X_k])^{\mathrm T}
and k \times k covariance matrix
: \Sigma_{i,j} = \operatorname{E}[(X_i - \mu_i)(X_j - \mu_j)] = \operatorname{Cov}[X_i, X_j]
such that 1 \le i \le k and 1 \le j \le k. The inverse of the covariance matrix is called the precision matrix, denoted by \boldsymbol{Q} = \boldsymbol\Sigma^{-1}.
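The parametrization above can be sketched numerically. The following is a minimal NumPy illustration (not part of the source text); the particular mean vector and covariance matrix are made up for the example.

```python
import numpy as np

# Illustrative parameters: mean vector mu, covariance matrix Sigma,
# and the precision matrix Q = Sigma^{-1}.
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])          # symmetric, positive definite

Q = np.linalg.inv(Sigma)                # precision matrix

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mu, Sigma, size=100_000)

print(samples.mean(axis=0))             # close to mu
print(np.cov(samples, rowvar=False))    # close to Sigma
```

The sample mean and sample covariance recover \boldsymbol\mu and \boldsymbol\Sigma up to Monte Carlo error.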
Standard normal random vector
A real random vector \mathbf{X} = (X_1, \ldots, X_k)^{\mathrm T} is called a standard normal random vector if all of its components X_i are independent and each is a zero-mean unit-variance normally distributed random variable, i.e. if X_i \sim \mathcal{N}(0, 1) for all i = 1, \ldots, k.
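As a quick numerical check of this definition (a sketch using NumPy, with an arbitrary choice of k), a standard normal random vector has mean vector zero and identity covariance:

```python
import numpy as np

# Each row of Z is one standard normal random vector with k i.i.d.
# N(0, 1) components.
k = 3
rng = np.random.default_rng(1)
Z = rng.standard_normal(size=(200_000, k))

print(Z.mean(axis=0))            # close to the zero vector
print(np.cov(Z, rowvar=False))   # close to the k x k identity matrix
```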
Centered normal random vector
A real random vector \mathbf{X} = (X_1, \ldots, X_k)^{\mathrm T} is called a centered normal random vector if there exists a deterministic k \times \ell matrix \boldsymbol{A} such that \boldsymbol{A} \mathbf{Z} has the same distribution as \mathbf{X}, where \mathbf{Z} is a standard normal random vector with \ell components.
Normal random vector
A real random vector \mathbf{X} = (X_1, \ldots, X_k)^{\mathrm T} is called a normal random vector if there exists a random \ell-vector \mathbf{Z}, which is a standard normal random vector, a ''k''-vector \boldsymbol\mu, and a k \times \ell matrix \boldsymbol{A}, such that \mathbf{X} = \boldsymbol{A} \mathbf{Z} + \boldsymbol\mu.
Formally:
: \mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma) \iff \text{there exist } \boldsymbol\mu \in \mathbb{R}^k, \boldsymbol{A} \in \mathbb{R}^{k \times \ell} \text{ such that } \mathbf{X} = \boldsymbol{A} \mathbf{Z} + \boldsymbol\mu \text{ for } Z_n \sim \mathcal{N}(0, 1) \text{ i.i.d.}
Here the covariance matrix is \boldsymbol\Sigma = \boldsymbol{A} \boldsymbol{A}^{\mathrm T}.
In the degenerate case where the covariance matrix is singular, the corresponding distribution has no density; see the section below for details. This case arises frequently in statistics; for example, in the distribution of the vector of residuals in ordinary least squares regression. The X_i are in general ''not'' independent; they can be seen as the result of applying the matrix \boldsymbol{A} to a collection of independent Gaussian variables \mathbf{Z}.
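The construction \mathbf{X} = \boldsymbol{A} \mathbf{Z} + \boldsymbol\mu can be sketched directly; the following NumPy example (with made-up \boldsymbol\mu and \boldsymbol{A}) deliberately uses a tall matrix so that the implied covariance \boldsymbol{A} \boldsymbol{A}^{\mathrm T} is singular:

```python
import numpy as np

# X = A Z + mu: applying a matrix A to i.i.d. standard normals Z yields a
# normal vector with covariance A A^T.
rng = np.random.default_rng(2)
mu = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.2, 0.3]])       # k = 3, ell = 2, so Sigma = A A^T has rank 2

Z = rng.standard_normal(size=(300_000, 2))
X = Z @ A.T + mu                 # rows are samples of X = A Z + mu

Sigma = A @ A.T                  # implied (singular) covariance matrix
print(np.linalg.matrix_rank(Sigma))   # 2 < 3: a degenerate normal vector
```

The sample covariance of the rows of X matches \boldsymbol{A} \boldsymbol{A}^{\mathrm T}, even though no 3-dimensional density exists.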
Equivalent definitions
The following definitions are equivalent to the definition given above. A random vector \mathbf{X} = (X_1, \ldots, X_k)^{\mathrm T} has a multivariate normal distribution if it satisfies one of the following equivalent conditions.
* Every linear combination of its components is normally distributed. That is, for any constant vector \mathbf{a} \in \mathbb{R}^k, the random variable Y = \mathbf{a}^{\mathrm T} \mathbf{X} has a univariate normal distribution, where a univariate normal distribution with zero variance is a point mass on its mean.
* There is a ''k''-vector \boldsymbol\mu and a symmetric, positive semidefinite k \times k matrix \boldsymbol\Sigma, such that the characteristic function of \mathbf{X} is
: \varphi_{\mathbf{X}}(\mathbf{u}) = \exp\left( i \mathbf{u}^{\mathrm T} \boldsymbol\mu - \tfrac{1}{2} \mathbf{u}^{\mathrm T} \boldsymbol\Sigma \mathbf{u} \right).
The spherical normal distribution can be characterised as the unique distribution where components are independent in any orthogonal coordinate system.
Density function
Non-degenerate case
The multivariate normal distribution is said to be "non-degenerate" when the symmetric covariance matrix \boldsymbol\Sigma is positive definite. In this case the distribution has density
: f_{\mathbf{X}}(x_1, \ldots, x_k) = \frac{\exp\left( -\frac{1}{2} (\mathbf{x} - \boldsymbol\mu)^{\mathrm T} \boldsymbol\Sigma^{-1} (\mathbf{x} - \boldsymbol\mu) \right)}{\sqrt{(2\pi)^k |\boldsymbol\Sigma|}},
where \mathbf{x} is a real ''k''-dimensional column vector and |\boldsymbol\Sigma| \equiv \det \boldsymbol\Sigma is the determinant of \boldsymbol\Sigma, also known as the generalized variance. The equation above reduces to that of the univariate normal distribution if \boldsymbol\Sigma is a 1 \times 1 matrix (i.e., a single real number).
The circularly symmetric version of the complex normal distribution has a slightly different form.
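The density formula can be checked numerically. The sketch below (NumPy; the parameter values are illustrative) implements the formula directly and verifies it against a case where the answer is known independently: for a diagonal \boldsymbol\Sigma the components are independent, so the joint density must factor into a product of univariate normal densities.

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Non-degenerate multivariate normal density, straight from the formula."""
    k = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.inv(Sigma) @ diff   # squared Mahalanobis distance
    norm = np.sqrt((2 * np.pi) ** k * np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm

def univ_pdf(x, m, var):
    """Univariate normal density with mean m and variance var."""
    return np.exp(-(x - m) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Diagonal Sigma => independent components => joint density factors.
mu = np.array([0.0, 1.0])
Sigma = np.diag([2.0, 0.5])
x = np.array([0.3, 0.7])

joint = mvn_pdf(x, mu, Sigma)
product = univ_pdf(x[0], mu[0], 2.0) * univ_pdf(x[1], mu[1], 0.5)
print(joint, product)   # the two values agree
```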
Each iso-density locus — the locus of points in ''k''-dimensional space each of which gives the same particular value of the density — is an ellipse or its higher-dimensional generalization; hence the multivariate normal is a special case of the elliptical distributions.
The quantity \sqrt{(\mathbf{x} - \boldsymbol\mu)^{\mathrm T} \boldsymbol\Sigma^{-1} (\mathbf{x} - \boldsymbol\mu)} is known as the Mahalanobis distance, which represents the distance of the test point \mathbf{x} from the mean \boldsymbol\mu. The squared Mahalanobis distance (\mathbf{x} - \boldsymbol\mu)^{\mathrm T} \boldsymbol\Sigma^{-1} (\mathbf{x} - \boldsymbol\mu) is decomposed into a sum of ''k'' terms, each term being a product of three meaningful components. Note that in the case when k = 1, the distribution reduces to a univariate normal distribution and the Mahalanobis distance reduces to the absolute value of the standard score. See also Interval below.
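A short NumPy sketch (the numbers are illustrative) of the Mahalanobis distance, including the k = 1 reduction to the absolute standard score:

```python
import numpy as np

def mahalanobis(x, mu, Sigma):
    """Mahalanobis distance sqrt((x - mu)^T Sigma^{-1} (x - mu))."""
    diff = np.atleast_1d(np.asarray(x, dtype=float) - mu)
    S = np.atleast_2d(Sigma)
    return float(np.sqrt(diff @ np.linalg.inv(S) @ diff))

# k = 1: with sigma^2 = 4, the distance is |3 - 1| / 2 = 1.0,
# the absolute value of the standard score.
d1 = mahalanobis(3.0, 1.0, 4.0)

# k = 2: correlation changes the distance even for the same
# Euclidean offset from the mean.
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
d2 = mahalanobis(np.array([1.0, 1.0]), np.zeros(2), Sigma)
print(d1, d2)
```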
Bivariate case
In the 2-dimensional nonsingular case (k = \operatorname{rank}(\boldsymbol\Sigma) = 2), the probability density function of a vector [X\ Y]^{\mathrm T} is:
: f(x, y) = \frac{1}{2 \pi \sigma_X \sigma_Y \sqrt{1 - \rho^2}} \exp\left( -\frac{1}{2(1 - \rho^2)} \left[ \frac{(x - \mu_X)^2}{\sigma_X^2} - \frac{2 \rho (x - \mu_X)(y - \mu_Y)}{\sigma_X \sigma_Y} + \frac{(y - \mu_Y)^2}{\sigma_Y^2} \right] \right),
where \rho is the correlation between X and Y and where \sigma_X > 0 and \sigma_Y > 0. In this case,
: \boldsymbol\mu = \begin{pmatrix} \mu_X \\ \mu_Y \end{pmatrix}, \qquad \boldsymbol\Sigma = \begin{pmatrix} \sigma_X^2 & \rho \sigma_X \sigma_Y \\ \rho \sigma_X \sigma_Y & \sigma_Y^2 \end{pmatrix}.
In the bivariate case, the first equivalent condition for multivariate normality can be made less restrictive: it is sufficient to verify that a countably infinite set of distinct linear combinations of X and Y are normal in order to conclude that the vector [X\ Y]^{\mathrm T} is bivariate normal.
The bivariate iso-density loci plotted in the x, y-plane are ellipses, whose principal axes are defined by the eigenvectors of the covariance matrix \boldsymbol\Sigma (the major and minor semidiameters of the ellipse equal the square roots of the ordered eigenvalues).
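The axes of such an ellipse can be computed from the eigendecomposition of the covariance matrix; the following NumPy sketch uses an arbitrary example covariance:

```python
import numpy as np

# Iso-density ellipse geometry from the covariance matrix: semidiameters are
# proportional to the square roots of the eigenvalues, and the axis
# directions are the eigenvectors.
Sigma = np.array([[3.0, 1.0],
                  [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigenvalues in ascending order
minor, major = np.sqrt(eigvals)            # minor and major semidiameter ratios
angle = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))  # major-axis tilt
print(major, minor, angle)
```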
As the absolute value of the correlation parameter \rho increases, these loci are squeezed toward the following line:
: y(x) = \operatorname{sgn}(\rho) \frac{\sigma_Y}{\sigma_X} (x - \mu_X) + \mu_Y.
This is because this expression, with \operatorname{sgn}(\rho) (where sgn is the sign function) replaced by \rho, is the best linear unbiased prediction of Y given a value of X.
Degenerate case
If the covariance matrix \boldsymbol\Sigma is not full rank, then the multivariate normal distribution is degenerate and does not have a density. More precisely, it does not have a density with respect to ''k''-dimensional Lebesgue measure (which is the usual measure assumed in calculus-level probability courses). Only random vectors whose distributions are absolutely continuous with respect to a measure are said to have densities (with respect to that measure). To talk about densities but avoid dealing with measure-theoretic complications, it can be simpler to restrict attention to a subset of \operatorname{rank}(\boldsymbol\Sigma) of the coordinates of \mathbf{x} such that the covariance matrix for this subset is positive definite; then the other coordinates may be thought of as an affine function of these selected coordinates.
To talk about densities meaningfully in singular cases, then, we must select a different base measure. Using the disintegration theorem we can define a restriction of Lebesgue measure to the \operatorname{rank}(\boldsymbol\Sigma)-dimensional affine subspace of \mathbb{R}^k where the Gaussian distribution is supported, i.e. \{\boldsymbol\mu + \boldsymbol\Sigma^{1/2} \mathbf{v} : \mathbf{v} \in \mathbb{R}^k\}. With respect to this measure the distribution has the density
: f(\mathbf{x}) = \frac{\exp\left( -\frac{1}{2} (\mathbf{x} - \boldsymbol\mu)^{\mathrm T} \boldsymbol\Sigma^{+} (\mathbf{x} - \boldsymbol\mu) \right)}{\sqrt{(2\pi)^{\operatorname{rank}(\boldsymbol\Sigma)} \det\nolimits^{*} \boldsymbol\Sigma}},
where \boldsymbol\Sigma^{+} is the generalized inverse of \boldsymbol\Sigma and \det^{*} \boldsymbol\Sigma is the pseudo-determinant.
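A NumPy sketch of this singular density (the rank-1 example is made up for illustration): the pseudoinverse supplies the generalized inverse and the product of nonzero eigenvalues supplies the pseudo-determinant.

```python
import numpy as np

def degenerate_mvn_pdf(x, mu, Sigma, tol=1e-10):
    """Density of a (possibly singular) normal on its supporting affine
    subspace, using the pseudoinverse and pseudo-determinant."""
    diff = x - mu
    Sigma_plus = np.linalg.pinv(Sigma)       # generalized inverse
    eigvals = np.linalg.eigvalsh(Sigma)
    nonzero = eigvals[eigvals > tol]
    rank = len(nonzero)
    pdet = np.prod(nonzero)                  # pseudo-determinant
    quad = diff @ Sigma_plus @ diff
    return np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** rank * pdet)

# Rank-1 example: X = (Z, 2Z) is supported on a line in R^2.
A = np.array([[1.0], [2.0]])
Sigma = A @ A.T                  # rank 1, det(Sigma) = 0
x = np.array([0.5, 1.0])         # a point on the support (Z = 0.5)
val = degenerate_mvn_pdf(x, np.zeros(2), Sigma)
print(val)
```

Along the supporting line, parametrized by arclength, X reduces to a univariate normal with variance 5, and the value above agrees with that 1-D density at the corresponding point.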
Cumulative distribution function
The notion of cumulative distribution function (cdf) in dimension 1 can be extended in two ways to the multidimensional case, based on rectangular and ellipsoidal regions.
The first way is to define the cdf F(\mathbf{x}) of a random vector \mathbf{X} as the probability that all components of \mathbf{X} are less than or equal to the corresponding values in the vector \mathbf{x}:
: F(\mathbf{x}) = \mathbb{P}(\mathbf{X} \le \mathbf{x}), \quad \text{where } \mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma).
Though there is no closed form for F(\mathbf{x}), there are a number of algorithms that estimate it numerically.
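The simplest such estimate is plain Monte Carlo: count samples with every component below the threshold. The sketch below (NumPy plus the standard library's erf; the parameters are illustrative) checks the estimate against a case with a known answer, since for independent components the cdf factors into univariate normal cdfs \Phi:

```python
import math
import numpy as np

mu = np.array([0.0, 0.0])
Sigma = np.eye(2)                 # independent components, for an easy check
x = np.array([0.5, 1.0])

# Monte Carlo estimate of F(x) = P(X <= x componentwise).
rng = np.random.default_rng(3)
samples = rng.multivariate_normal(mu, Sigma, size=500_000)
cdf_mc = np.mean(np.all(samples <= x, axis=1))

def phi(z):
    """Univariate standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

cdf_exact = phi(0.5) * phi(1.0)   # valid because the components are independent
print(cdf_mc, cdf_exact)
```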
Another way is to define the cdf F(r) as the probability that a sample lies inside the ellipsoid determined by its Mahalanobis distance r from the Gaussian, a direct generalization of the standard deviation.[Bensimhoun Michael, ''N-Dimensional Cumulative Function, And Other Useful Facts About Gaussians and Normal Densities'' (2006)] In order to compute the values of this function, closed analytic formulae exist, as follows.
Interval
The interval for the multivariate normal distribution yields a region consisting of those vectors \mathbf{x} satisfying
: (\mathbf{x} - \boldsymbol\mu)^{\mathrm T} \boldsymbol\Sigma^{-1} (\mathbf{x} - \boldsymbol\mu) \le \chi^2_k(p).
Here \mathbf{x} is a ''k''-dimensional vector, \boldsymbol\mu is the known ''k''-dimensional mean vector, \boldsymbol\Sigma is the known covariance matrix and \chi^2_k(p) is the quantile function for probability p of the chi-squared distribution with ''k'' degrees of freedom. When k = 2, the expression defines the interior of an ellipse and the chi-squared distribution simplifies to an exponential distribution with mean equal to two (rate equal to half).
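In the bivariate case the quantile therefore has the closed form \chi^2_2(p) = -2 \ln(1 - p), which makes the coverage of the ellipse easy to check by Monte Carlo. A NumPy sketch, with illustrative parameters:

```python
import numpy as np

# For k = 2, chi2_2(p) = -2 ln(1 - p) (the exponential with mean 2 noted
# above), so the p-coverage ellipse can be verified by sampling.
p = 0.95
threshold = -2.0 * np.log(1.0 - p)    # chi-squared quantile, 2 dof

mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
Q = np.linalg.inv(Sigma)

rng = np.random.default_rng(4)
X = rng.multivariate_normal(mu, Sigma, size=400_000)
d2 = np.einsum('ij,jk,ik->i', X - mu, Q, X - mu)  # squared Mahalanobis distances
coverage = np.mean(d2 <= threshold)
print(coverage)      # close to 0.95
```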
Complementary cumulative distribution function (tail distribution)
The complementary cumulative distribution function (ccdf) or the tail distribution is defined as \overline{F}(\mathbf{x}) = 1 - \mathbb{P}(\mathbf{X} \le \mathbf{x}). When \mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma), the ccdf can be written as a probability involving the maximum of dependent Gaussian variables:
: \overline{F}(\mathbf{x}) = \mathbb{P}\left( \bigcup_i \{X_i > x_i\} \right) = \mathbb{P}\left( \max_i (X_i - x_i) > 0 \right).
While no simple closed formula exists for computing the ccdf, the maximum of dependent Gaussian variables can be estimated accurately via the Monte Carlo method.
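Such a Monte Carlo estimate is straightforward; the sketch below (NumPy, with an arbitrary example covariance and threshold) estimates the probability that at least one component exceeds its threshold:

```python
import numpy as np

# ccdf(x) = P(X1 > x1 or ... or Xk > xk) = P(max_i (X_i - x_i) > 0),
# estimated by sampling dependent Gaussians.
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
x = np.array([1.0, 1.0, 1.0])

rng = np.random.default_rng(5)
samples = rng.multivariate_normal(mu, Sigma, size=500_000)
ccdf_mc = np.mean(np.max(samples - x, axis=1) > 0)
print(ccdf_mc)
```

By construction this agrees with one minus the Monte Carlo cdf estimate on the same samples.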
Properties
Probability in different domains
The probability content of the multivariate normal in a quadratic domain defined by q(\mathbf{x}) = \mathbf{x}^{\mathrm T} \mathbf{Q}_2 \mathbf{x} + \mathbf{q}_1^{\mathrm T} \mathbf{x} + q_0 > 0 (where \mathbf{Q}_2 is a matrix, \mathbf{q}_1 is a vector, and q_0 is a scalar), which is relevant for Bayesian classification/decision theory using Gaussian discriminant analysis, is given by the generalized chi-squared distribution.
The probability content within any general domain defined by f(\mathbf{x}) > 0 (where f(\mathbf{x}) is a general function) can be computed using the numerical method of ray-tracing (Matlab code).
Higher moments
The ''k''th-order moments of \mathbf{x} are given by
: \mu_{1, \ldots, N}(\mathbf{x}) \mathrel{\overset{\mathrm{def}}{=}} \mu_{r_1, \ldots, r_N}(\mathbf{x}) \mathrel{\overset{\mathrm{def}}{=}} \operatorname{E}\left[ \prod_{j=1}^{N} X_j^{r_j} \right],
where r_1 + r_2 + \cdots + r_N = k.
The ''k''th-order central moments are as follows: if ''k'' is odd they vanish, \mu_{1, \ldots, k}(\mathbf{x} - \boldsymbol\mu) = 0; if ''k'' is even, with k = 2\lambda, then
: \mu_{1, \ldots, 2\lambda}(\mathbf{x} - \boldsymbol\mu) = \sum \left( \sigma_{ij} \sigma_{k\ell} \cdots \sigma_{XZ} \right),
where the sum is taken over all allocations of the set \{1, \ldots, 2\lambda\} into \lambda (unordered) pairs. That is, for a ''k''th central moment, one sums the products of \lambda = k/2 covariances (the expected value \mu is taken to be 0 in the interests of parsimony):
: \operatorname{E}[x_1 x_2 x_3 x_4 x_5 x_6] = \sigma_{12}\sigma_{34}\sigma_{56} + \sigma_{12}\sigma_{35}\sigma_{46} + \sigma_{12}\sigma_{36}\sigma_{45} + \sigma_{13}\sigma_{24}\sigma_{56} + \sigma_{13}\sigma_{25}\sigma_{46} + \sigma_{13}\sigma_{26}\sigma_{45} + \sigma_{14}\sigma_{23}\sigma_{56} + \sigma_{14}\sigma_{25}\sigma_{36} + \sigma_{14}\sigma_{26}\sigma_{35} + \sigma_{15}\sigma_{23}\sigma_{46} + \sigma_{15}\sigma_{24}\sigma_{36} + \sigma_{15}\sigma_{26}\sigma_{34} + \sigma_{16}\sigma_{23}\sigma_{45} + \sigma_{16}\sigma_{24}\sigma_{35} + \sigma_{16}\sigma_{25}\sigma_{34}.
This yields \tfrac{k!}{(k/2)!\, 2^{k/2}} terms in the sum (15 in the above case), each being the product of \lambda (in this case 3) covariances. For fourth-order moments (four variables) there are three terms. For sixth-order moments there are 3 \times 5 = 15 terms, and for eighth-order moments there are 3 \times 5 \times 7 = 105 terms.
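The pairing sum above can be enumerated mechanically. The sketch below (plain Python plus NumPy; the example covariance is made up) generates all pair partitions and sums the corresponding covariance products:

```python
import numpy as np

def pair_partitions(idx):
    """Yield all partitions of a list of (evenly many) indices into
    unordered pairs."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for i in range(len(rest)):
        for sub in pair_partitions(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + sub

def central_moment(indices, Sigma):
    """E[(x_{i1} - mu_{i1}) ... (x_{ik} - mu_{ik})] via the pairing sum."""
    if len(indices) % 2 == 1:
        return 0.0                    # odd central moments vanish
    return sum(np.prod([Sigma[a][b] for a, b in part])
               for part in pair_partitions(list(indices)))

Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
# Fourth moment E[x0^4] = 3 * sigma00^2: the 3 pairings give 3 * 2.0^2 = 12.
print(central_moment((0, 0, 0, 0), Sigma))
# Sixth-order moments sum over 15 pairings, as stated above.
print(len(list(pair_partitions([0, 1, 2, 3, 4, 5]))))
```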
The covariances are then determined by replacing the terms of the list