In statistics, the matrix normal distribution or matrix Gaussian distribution is a probability distribution that is a generalization of the multivariate normal distribution to matrix-valued random variables.


Definition

The probability density function for the random matrix \mathbf{X} (''n'' × ''p'') that follows the matrix normal distribution \mathcal{MN}_{n\times p}(\mathbf{M}, \mathbf{U}, \mathbf{V}) has the form:

: p(\mathbf{X}\mid\mathbf{M}, \mathbf{U}, \mathbf{V}) = \frac{\exp\left( -\frac{1}{2} \, \mathrm{tr}\left[ \mathbf{V}^{-1} (\mathbf{X} - \mathbf{M})^{T} \mathbf{U}^{-1} (\mathbf{X} - \mathbf{M}) \right] \right)}{(2\pi)^{np/2} |\mathbf{V}|^{n/2} |\mathbf{U}|^{p/2}}

where \mathrm{tr} denotes trace and \mathbf{M} is ''n'' × ''p'', \mathbf{U} is ''n'' × ''n'' and \mathbf{V} is ''p'' × ''p'', and the density is understood as the probability density function with respect to the standard Lebesgue measure in \mathbb{R}^{n\times p}, i.e. the measure corresponding to integration with respect to dx_{11}\, dx_{21} \dots dx_{n1}\, dx_{12} \dots dx_{n2} \dots dx_{np}.

The matrix normal is related to the multivariate normal distribution in the following way:

:\mathbf{X} \sim \mathcal{MN}_{n\times p}(\mathbf{M}, \mathbf{U}, \mathbf{V}),

if and only if

:\mathrm{vec}(\mathbf{X}) \sim \mathcal{N}_{np}(\mathrm{vec}(\mathbf{M}), \mathbf{V} \otimes \mathbf{U})

where \otimes denotes the Kronecker product and \mathrm{vec}(\mathbf{M}) denotes the vectorization of \mathbf{M}.
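This equivalence is easy to check numerically. The sketch below (plain NumPy/SciPy; the dimensions and parameter values are arbitrary) evaluates the matrix normal log-density directly from the formula above and compares it with the multivariate normal log-density of vec(X) with covariance \mathbf{V} \otimes \mathbf{U}:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, p = 3, 2

# Arbitrary parameters; U and V are made symmetric positive definite.
M = rng.normal(size=(n, p))
A = rng.normal(size=(n, n)); U = A @ A.T + n * np.eye(n)
B = rng.normal(size=(p, p)); V = B @ B.T + p * np.eye(p)
X = rng.normal(size=(n, p))

# Matrix normal log-density, straight from the formula.
R = X - M
quad = np.trace(np.linalg.inv(V) @ R.T @ np.linalg.inv(U) @ R)
logdet = n * np.linalg.slogdet(V)[1] + p * np.linalg.slogdet(U)[1]
logpdf_matrix = -0.5 * quad - 0.5 * logdet - 0.5 * n * p * np.log(2 * np.pi)

# Equivalent multivariate normal on vec(X): mean vec(M), covariance V kron U.
# vec() stacks columns, i.e. flatten(order="F") in NumPy.
vecX = X.flatten(order="F")
vecM = M.flatten(order="F")
logpdf_vec = multivariate_normal(mean=vecM, cov=np.kron(V, U)).logpdf(vecX)
```

Note the Kronecker factor order, `np.kron(V, U)`, which matches the column-stacking convention of vec.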


Proof

The equivalence between the above ''matrix normal'' and ''multivariate normal'' density functions can be shown using several properties of the trace and Kronecker product, as follows. We start with the argument of the exponent of the matrix normal PDF:

:\begin{align}
&\;\;\;\;-\frac12\mathrm{tr}\left[ \mathbf{V}^{-1} (\mathbf{X} - \mathbf{M})^{T} \mathbf{U}^{-1} (\mathbf{X} - \mathbf{M}) \right]\\
&= -\frac12\mathrm{vec}\left(\mathbf{X} - \mathbf{M}\right)^T \mathrm{vec}\left(\mathbf{U}^{-1} (\mathbf{X} - \mathbf{M}) \mathbf{V}^{-1}\right) \\
&= -\frac12\mathrm{vec}\left(\mathbf{X} - \mathbf{M}\right)^T \left(\mathbf{V}^{-1}\otimes\mathbf{U}^{-1}\right)\mathrm{vec}\left(\mathbf{X} - \mathbf{M}\right) \\
&= -\frac12\left[\mathrm{vec}(\mathbf{X}) - \mathrm{vec}(\mathbf{M})\right]^T \left(\mathbf{V}\otimes\mathbf{U}\right)^{-1}\left[\mathrm{vec}(\mathbf{X}) - \mathrm{vec}(\mathbf{M})\right]
\end{align}

which is the argument of the exponent of the multivariate normal PDF with respect to Lebesgue measure in \mathbb{R}^{np}. The proof is completed by using the determinant property: |\mathbf{V}\otimes \mathbf{U}| = |\mathbf{V}|^n |\mathbf{U}|^p.
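Both key steps of the proof, the trace-to-vec identity and the Kronecker determinant property, can be verified numerically. A small NumPy sketch with arbitrary positive-definite \mathbf{U} and \mathbf{V}:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 2
A = rng.normal(size=(n, n)); U = A @ A.T + np.eye(n)   # arbitrary n x n SPD
B = rng.normal(size=(p, p)); V = B @ B.T + np.eye(p)   # arbitrary p x p SPD
R = rng.normal(size=(n, p))                            # stands in for X - M

# tr[V^{-1} R^T U^{-1} R] = vec(R)^T (V^{-1} kron U^{-1}) vec(R)
quad_trace = np.trace(np.linalg.inv(V) @ R.T @ np.linalg.inv(U) @ R)
vecR = R.flatten(order="F")  # vec() stacks columns
quad_vec = vecR @ np.kron(np.linalg.inv(V), np.linalg.inv(U)) @ vecR

# |V kron U| = |V|^n |U|^p
det_kron = np.linalg.det(np.kron(V, U))
det_prod = np.linalg.det(V) ** n * np.linalg.det(U) ** p
```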


Properties

If \mathbf{X} \sim \mathcal{MN}_{n\times p}(\mathbf{M}, \mathbf{U}, \mathbf{V}), then we have the following properties:


Expected values

The mean, or expected value, is:

:E[\mathbf{X}] = \mathbf{M}

and we have the following second-order expectations:

:E[(\mathbf{X} - \mathbf{M})(\mathbf{X} - \mathbf{M})^{T}] = \mathbf{U}\operatorname{tr}(\mathbf{V})
:E[(\mathbf{X} - \mathbf{M})^{T} (\mathbf{X} - \mathbf{M})] = \mathbf{V}\operatorname{tr}(\mathbf{U})

where \operatorname{tr} denotes trace. More generally, for appropriately dimensioned matrices \mathbf{A}, \mathbf{B}, \mathbf{C}:

:\begin{align}
E[\mathbf{X}\mathbf{A}\mathbf{X}^{T}] &= \mathbf{U}\operatorname{tr}(\mathbf{A}^T\mathbf{V}) + \mathbf{M}\mathbf{A}\mathbf{M}^T\\
E[\mathbf{X}^T\mathbf{B}\mathbf{X}] &= \mathbf{V}\operatorname{tr}(\mathbf{U}\mathbf{B}^T) + \mathbf{M}^T\mathbf{B}\mathbf{M}\\
E[\mathbf{X}\mathbf{C}\mathbf{X}] &= \mathbf{V}\mathbf{C}^T\mathbf{U} + \mathbf{M}\mathbf{C}\mathbf{M}
\end{align}
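The first second-order identity follows entry-wise from the covariance of vec(X): Cov(X_{ik}, X_{jk}) = U_{ij} V_{kk}, which summed over k gives U_{ij} tr(V). The NumPy sketch below (arbitrary dimensions and values) recovers E[(X − M)(X − M)^T] from the Kronecker covariance and compares it with U tr(V):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 3, 2
A = rng.normal(size=(n, n)); U = A @ A.T + np.eye(n)   # arbitrary n x n SPD
B = rng.normal(size=(p, p)); V = B @ B.T + np.eye(p)   # arbitrary p x p SPD

# Covariance of vec(X) under column-stacking: entry (k*n + i, l*n + j)
# is Cov(X[i, k], X[j, l]).
S = np.kron(V, U)

# E[(X - M)(X - M)^T]_{ij} = sum over k of Cov(X[i, k], X[j, k])
E_outer = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E_outer[i, j] = sum(S[k * n + i, k * n + j] for k in range(p))
```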


Transformation

Transpose transform:

:\mathbf{X}^T \sim \mathcal{MN}_{p\times n}(\mathbf{M}^T, \mathbf{V}, \mathbf{U})

Linear transform: let \mathbf{D} (''r''-by-''n'') be of full rank ''r ≤ n'' and \mathbf{C} (''p''-by-''s'') be of full rank ''s ≤ p''; then:

:\mathbf{D}\mathbf{X}\mathbf{C} \sim \mathcal{MN}_{r\times s}(\mathbf{D}\mathbf{M}\mathbf{C}, \mathbf{D}\mathbf{U}\mathbf{D}^T, \mathbf{C}^T\mathbf{V}\mathbf{C})
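The linear-transform rule is a direct consequence of vec(DXC) = (C^T ⊗ D) vec(X) together with the mixed-product property of the Kronecker product. The following NumPy sketch (arbitrary sizes, random D and C, which are full rank almost surely) checks that the transformed covariance factorizes as claimed:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, r, s = 4, 3, 2, 2
Au = rng.normal(size=(n, n)); U = Au @ Au.T + np.eye(n)  # arbitrary n x n SPD
Av = rng.normal(size=(p, p)); V = Av @ Av.T + np.eye(p)  # arbitrary p x p SPD
D = rng.normal(size=(r, n))   # r-by-n, r <= n
C = rng.normal(size=(p, s))   # p-by-s, s <= p

# vec(D X C) = (C^T kron D) vec(X), so the transformed covariance is
# (C^T kron D)(V kron U)(C^T kron D)^T; by the mixed-product property this
# should equal (C^T V C) kron (D U D^T).
T = np.kron(C.T, D)
cov_transformed = T @ np.kron(V, U) @ T.T
cov_claimed = np.kron(C.T @ V @ C, D @ U @ D.T)
```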


Example

Consider a sample of ''n'' independent ''p''-dimensional random vectors identically distributed according to a multivariate normal distribution:

:\mathbf{x}_i \sim \mathcal{N}_p(\boldsymbol\mu, \boldsymbol\Sigma) \text{ for } i \in \{1, \dots, n\}.

When defining the ''n'' × ''p'' matrix \mathbf{X} for which the ''i''th row is \mathbf{x}_i^T, we obtain:

:\mathbf{X} \sim \mathcal{MN}_{n \times p}(\mathbf{M}, \mathbf{U}, \mathbf{V})

where each row of \mathbf{M} is equal to \boldsymbol\mu^T, that is \mathbf{M}=\mathbf{1}_n \boldsymbol\mu^T; \mathbf{U} is the ''n'' × ''n'' identity matrix, reflecting the independence of the rows; and \mathbf{V} = \boldsymbol\Sigma.
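This correspondence can be confirmed with scipy.stats.matrix_normal: the joint log-density of ''n'' independent rows drawn from \mathcal{N}_p(\boldsymbol\mu, \boldsymbol\Sigma) should equal the matrix normal log-density with \mathbf{M} = \mathbf{1}_n \boldsymbol\mu^T, \mathbf{U} = \mathbf{I}_n, \mathbf{V} = \boldsymbol\Sigma (the values below are arbitrary):

```python
import numpy as np
from scipy.stats import matrix_normal, multivariate_normal

rng = np.random.default_rng(4)
n, p = 5, 3
mu = rng.normal(size=p)
A = rng.normal(size=(p, p)); Sigma = A @ A.T + np.eye(p)  # arbitrary SPD

# Stack n iid N_p(mu, Sigma) draws as the rows of an n x p matrix X.
X = rng.multivariate_normal(mu, Sigma, size=n)

# Joint log-density of the n independent rows ...
logpdf_rows = multivariate_normal(mean=mu, cov=Sigma).logpdf(X).sum()

# ... equals the matrix normal log-density with M = 1_n mu^T, U = I_n, V = Sigma.
M = np.outer(np.ones(n), mu)
logpdf_mn = matrix_normal(mean=M, rowcov=np.eye(n), colcov=Sigma).logpdf(X)
```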


Maximum likelihood parameter estimation

Given ''k'' matrices, each of size ''n'' × ''p'', denoted \mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_k, which we assume have been sampled i.i.d. from a matrix normal distribution, the maximum likelihood estimate of the parameters can be obtained by maximizing:

: \prod_{i=1}^k \mathcal{MN}_{n\times p}(\mathbf{X}_i\mid\mathbf{M},\mathbf{U},\mathbf{V}).

The solution for the mean has a closed form, namely

: \mathbf{M} = \frac{1}{k} \sum_{i=1}^k\mathbf{X}_i

but the covariance parameters do not. However, these parameters can be iteratively maximized by zero-ing their gradients at:

: \mathbf{U} = \frac{1}{kp} \sum_{i=1}^k(\mathbf{X}_i-\mathbf{M})\mathbf{V}^{-1}(\mathbf{X}_i-\mathbf{M})^T

and

: \mathbf{V} = \frac{1}{kn} \sum_{i=1}^k(\mathbf{X}_i-\mathbf{M})^T\mathbf{U}^{-1}(\mathbf{X}_i-\mathbf{M}).

The covariance parameters are non-identifiable in the sense that for any scale factor ''s'' > 0 we have:

: \mathcal{MN}_{n\times p}(\mathbf{X}\mid\mathbf{M},\mathbf{U},\mathbf{V}) = \mathcal{MN}_{n\times p}(\mathbf{X}\mid\mathbf{M},s\mathbf{U},\tfrac{1}{s}\mathbf{V}).
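A minimal sketch of this alternating ("flip-flop") iteration in NumPy, using synthetic data drawn from a known matrix normal; the helper mn_logpdf and all parameter values are illustrative, and the final rescaling fixes the scale non-identifiability by normalizing \mathbf{U}:

```python
import numpy as np

def mn_logpdf(X, M, U, V):
    """Matrix normal log-density, straight from the definition above."""
    n, p = X.shape
    R = X - M
    quad = np.trace(np.linalg.inv(V) @ R.T @ np.linalg.inv(U) @ R)
    return (-0.5 * quad - 0.5 * n * p * np.log(2 * np.pi)
            - 0.5 * n * np.linalg.slogdet(V)[1]
            - 0.5 * p * np.linalg.slogdet(U)[1])

rng = np.random.default_rng(5)
n, p, k = 4, 3, 200

# Synthetic ground truth (arbitrary SPD covariances for the sketch).
Au = rng.normal(size=(n, n)); U_true = Au @ Au.T + np.eye(n)
Av = rng.normal(size=(p, p)); V_true = Av @ Av.T + np.eye(p)
M_true = rng.normal(size=(n, p))
LU, LV = np.linalg.cholesky(U_true), np.linalg.cholesky(V_true)
Xs = [M_true + LU @ rng.normal(size=(n, p)) @ LV.T for _ in range(k)]

M = sum(Xs) / k                 # closed-form mean estimate
R = [X - M for X in Xs]

# Flip-flop: alternate the two zero-gradient conditions until they stabilize.
V = np.eye(p)
for _ in range(50):
    U = sum(Ri @ np.linalg.inv(V) @ Ri.T for Ri in R) / (k * p)
    V = sum(Ri.T @ np.linalg.inv(U) @ Ri for Ri in R) / (k * n)

# Resolve the scale non-identifiability by normalizing U.
s = U[0, 0]
U, V = U / s, V * s
```

Because only the product \mathbf{V} \otimes \mathbf{U} is identifiable, any convergence check or comparison against ground truth should be made after fixing the scale, as in the last two lines.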


Drawing values from the distribution

Sampling from the matrix normal distribution is a special case of the sampling procedure for the multivariate normal distribution. Let \mathbf{X} be an ''n'' by ''p'' matrix of ''np'' independent samples from the standard normal distribution, so that

: \mathbf{X}\sim\mathcal{MN}_{n\times p}(\mathbf{0},\mathbf{I},\mathbf{I}).

Then let

: \mathbf{Y}=\mathbf{M}+\mathbf{A}\mathbf{X}\mathbf{B},

so that

: \mathbf{Y}\sim\mathcal{MN}_{n\times p}(\mathbf{M},\mathbf{A}\mathbf{A}^T,\mathbf{B}^T\mathbf{B}),

where \mathbf{A} and \mathbf{B} can be chosen by Cholesky decomposition or a similar matrix square root operation.
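A sketch of this procedure in NumPy (the parameter values are arbitrary); \mathbf{A} and \mathbf{B} are taken from Cholesky factors so that \mathbf{A}\mathbf{A}^T = \mathbf{U} and \mathbf{B}^T\mathbf{B} = \mathbf{V}, and a large batch of draws is used to check the first two moments empirically:

```python
import numpy as np

rng = np.random.default_rng(6)
M = np.array([[0.0, 1.0], [2.0, 3.0]])
U = np.array([[2.0, 0.5], [0.5, 1.0]])   # desired n x n among-row covariance
V = np.array([[1.0, 0.3], [0.3, 2.0]])   # desired p x p among-column covariance

# Choose A and B so that A A^T = U and B^T B = V.
A = np.linalg.cholesky(U)
B = np.linalg.cholesky(V).T

def sample_matrix_normal(rng, M, A, B):
    """One draw: fill X with iid standard normals, then map Y = M + A X B."""
    X = rng.normal(size=M.shape)
    return M + A @ X @ B

Y = sample_matrix_normal(rng, M, A, B)

# Empirical moment check over a large vectorized batch of draws.
N = 200_000
Z = rng.normal(size=(N, *M.shape))
Ys = M + A @ Z @ B                               # broadcasted batch of draws
vecY = Ys.transpose(0, 2, 1).reshape(N, -1)      # column-stacking vec, per draw
emp_mean = vecY.mean(axis=0)                     # should approach vec(M)
emp_cov = np.cov(vecY, rowvar=False)             # should approach V kron U
```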


Relation to other distributions

Dawid (1981) provides a discussion of the relation of the matrix-valued normal distribution to other distributions, including the Wishart distribution, inverse-Wishart distribution and matrix t-distribution, but uses different notation from that employed here.


See also

* Multivariate normal distribution


References

* Dawid, A. P. (1981). "Some matrix-variate distribution theory: notational considerations and a Bayesian application". ''Biometrika'', 68 (1), 265–274.