Wishart Distribution

In statistics, the Wishart distribution is a generalization of the gamma distribution to multiple dimensions. It is named in honor of John Wishart, who first formulated the distribution in 1928. Other names include Wishart ensemble (in random matrix theory, probability distributions over matrices are usually called "ensembles"), or Wishart–Laguerre ensemble (since its eigenvalue distribution involves Laguerre polynomials), or LOE, LUE, LSE (in analogy with the Gaussian ensembles GOE, GUE, GSE). It is a family of probability distributions defined over symmetric, positive-definite random matrices (i.e. matrix-valued random variables). These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse covariance-matrix of a multivariate-normal random vector.


Definition

Suppose G is a p \times n matrix, each column of which is independently drawn from a p-variate normal distribution with zero mean:
:G_{(i)} = (g_i^1,\dots,g_i^p)^T \sim \mathcal{N}_p(0,V).
Then the Wishart distribution is the probability distribution of the p \times p random matrix
:S = G G^T = \sum_{i=1}^n G_{(i)} G_{(i)}^T
known as the scatter matrix. One indicates that S has that probability distribution by writing
:S \sim W_p(V,n).
The positive integer n is the number of ''degrees of freedom''. Sometimes this is written W(V,p,n). For n \ge p the matrix S is invertible with probability 1 if V is invertible. If p = V = 1 then this distribution is a chi-squared distribution with n degrees of freedom.
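
To make this construction concrete, the following sketch (a minimal numpy example; the dimension p, degrees of freedom n, and scale matrix V are arbitrary illustrative values) draws the columns of G from \mathcal{N}_p(0,V) and forms the scatter matrix:

 import numpy as np

 rng = np.random.default_rng(0)

 p, n = 3, 10                    # dimension and degrees of freedom (example values)
 V = np.array([[2.0, 0.5, 0.0],  # example scale matrix (symmetric positive definite)
               [0.5, 1.0, 0.3],
               [0.0, 0.3, 1.5]])

 # Draw n independent columns g_(i) ~ N_p(0, V); stack them into the p-by-n matrix G.
 G = rng.multivariate_normal(mean=np.zeros(p), cov=V, size=n).T

 # The scatter matrix S = G G^T is then a draw from W_p(V, n).
 S = G @ G.T
 print(S)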


Occurrence

The Wishart distribution arises as the distribution of the sample covariance matrix for a sample from a multivariate normal distribution. It occurs frequently in likelihood-ratio tests in multivariate statistical analysis. It also arises in the spectral theory of random matrices and in multidimensional Bayesian analysis. It is also encountered in wireless communications, while analyzing the performance of Rayleigh fading MIMO wireless channels.


Probability density function

The Wishart distribution can be characterized by its probability density function as follows: Let \mathbf X be a p \times p symmetric matrix of random variables that is positive semi-definite. Let \mathbf V be a (fixed) symmetric positive definite matrix of size p \times p. Then, if n \ge p, \mathbf X has a Wishart distribution with n degrees of freedom if it has the probability density function
:f_{\mathbf X}(\mathbf X) = \frac{1}{2^{np/2} \left|\mathbf V\right|^{n/2} \Gamma_p\left(\frac n 2\right)} \left|\mathbf X\right|^{(n-p-1)/2} e^{-\frac 1 2 \operatorname{tr}(\mathbf V^{-1}\mathbf X)}
where \left|\mathbf X\right| is the determinant of \mathbf X and \Gamma_p is the multivariate gamma function defined as
:\Gamma_p\left(\frac n 2\right) = \pi^{p(p-1)/4} \prod_{j=1}^p \Gamma\left(\frac n 2 - \frac{j-1}{2}\right).
The density above is not the joint density of all the p^2 elements of the random matrix (such a density does not exist because of the symmetry constraints X_{ij}=X_{ji}); it is rather the joint density of the p(p+1)/2 elements X_{ij} for i\le j (, page 38). Also, the density formula above applies only to positive definite matrices \mathbf x; for other matrices the density is equal to zero.
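
For numerical work the density need not be coded by hand: scipy ships a Wishart implementation with the same (n, V) parametrization, exposed as df and scale. A minimal sketch (the parameter values and the evaluation point are arbitrary examples):

 import numpy as np
 from scipy.stats import wishart

 n = 5                            # degrees of freedom (here n >= p = 2)
 V = np.array([[1.0, 0.3],
               [0.3, 2.0]])       # example scale matrix
 X = np.array([[4.0, 1.0],
               [1.0, 6.0]])       # a positive definite matrix at which to evaluate

 print(wishart.logpdf(X, df=n, scale=V))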


Spectral density

The joint eigenvalue density for the eigenvalues \lambda_1,\dots,\lambda_p \ge 0 of a random matrix \mathbf X \sim W_p(\mathbf I, n) is
:c_{n,p}\, e^{-\frac 1 2 \sum_i \lambda_i} \prod_i \lambda_i^{(n-p-1)/2} \prod_{i<j} \left|\lambda_i - \lambda_j\right|
where c_{n,p} is a constant. In fact the above definition can be extended to any real n > p-1. If n \le p-1, then the Wishart no longer has a density; instead it represents a singular distribution that takes values in a lower-dimension subspace of the space of p \times p matrices.
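
The singular regime is easy to observe numerically: for n \le p-1 a draw G G^T has rank at most n, so it lives in a lower-dimensional subspace. A small sketch (example values, with V = I):

 import numpy as np

 rng = np.random.default_rng(1)

 p, n = 5, 3                        # n <= p - 1, so the distribution is singular
 G = rng.standard_normal((p, n))    # columns ~ N_p(0, I)
 S = G @ G.T

 # The rank is (almost surely) n, and p - n eigenvalues are numerically zero.
 print(np.linalg.matrix_rank(S))        # -> 3
 print(np.sort(np.linalg.eigvalsh(S)))  # the two smallest are ~ 0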


Use in Bayesian statistics

In Bayesian statistics, in the context of the multivariate normal distribution, the Wishart distribution is the conjugate prior to the precision matrix \mathbf\Omega = \mathbf\Sigma^{-1}, where \mathbf\Sigma is the covariance matrix.


Choice of parameters

The least informative, proper Wishart prior is obtained by setting n = p. A common choice for V leverages the fact that the mean of \mathbf X \sim W_p(V, n) is nV. Then V is chosen so that nV equals an initial guess for \mathbf X. For instance, when estimating a precision matrix \mathbf\Sigma^{-1} \sim W_p(V, n) a reasonable choice for V would be n^{-1}\mathbf\Sigma_0^{-1}, where \mathbf\Sigma_0 is some prior estimate for the covariance matrix \mathbf\Sigma.
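
A short sketch of this choice (Sigma0 below is a hypothetical prior guess for the covariance matrix; nothing here is prescribed by any particular library):

 import numpy as np

 p = 3
 Sigma0 = np.diag([1.0, 2.0, 0.5])   # hypothetical prior estimate of the covariance

 n = p                               # least informative proper choice
 V = np.linalg.inv(Sigma0) / n       # V = n^{-1} Sigma0^{-1}

 # Check: the prior mean of the precision matrix, n * V, recovers Sigma0^{-1}.
 print(np.allclose(n * V, np.linalg.inv(Sigma0)))   # -> True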


Properties


Log-expectation

The following formula plays a role in variational Bayes derivations for Bayes networks involving the Wishart distribution. From equation (2.63),
:\operatorname{E}[\, \ln\left|\mathbf X\right| \,] = \psi_p\left(\frac n 2\right) + p\,\ln(2) + \ln\left|\mathbf V\right|
where \psi_p is the multivariate digamma function (the derivative of the log of the multivariate gamma function).
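
Since \psi_p(a) = \sum_{i=1}^p \psi\left(a + \frac{1-i}{2}\right) is a finite sum of ordinary digamma values, the formula is easy to evaluate. A sketch with a Monte Carlo sanity check (parameter values are arbitrary examples):

 import numpy as np
 from scipy.special import digamma
 from scipy.stats import wishart

 def multivariate_digamma(a, p):
     # psi_p(a) = sum_{i=1}^p psi(a + (1 - i) / 2)
     return sum(digamma(a + (1 - i) / 2) for i in range(1, p + 1))

 p, n = 3, 7
 V = np.diag([1.0, 2.0, 0.5])

 # Closed form: E[ln|X|] = psi_p(n/2) + p ln 2 + ln|V|
 exact = multivariate_digamma(n / 2, p) + p * np.log(2) + np.log(np.linalg.det(V))

 # Monte Carlo estimate for comparison.
 samples = wishart.rvs(df=n, scale=V, size=100_000, random_state=0)
 mc = np.mean([np.linalg.slogdet(X)[1] for X in samples])
 print(exact, mc)   # should agree to a couple of decimal places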


Log-variance

The following variance computation could be of help in Bayesian statistics:
:\operatorname{Var}\left[\, \ln\left|\mathbf X\right| \,\right] = \sum_{i=1}^p \psi_1\left(\frac{n+1-i}{2}\right)
where \psi_1 is the trigamma function. This comes up when computing the Fisher information of the Wishart random variable.
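
Analogously, this is a finite sum of trigamma values; a minimal sketch (polygamma(1, x) is scipy's trigamma):

 from scipy.special import polygamma

 def logdet_variance(n, p):
     # Var[ln|X|] = sum_{i=1}^p psi_1((n + 1 - i) / 2), with psi_1 the trigamma function
     return sum(polygamma(1, (n + 1 - i) / 2) for i in range(1, p + 1))

 print(logdet_variance(n=7, p=3))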


Entropy

The information entropy of the distribution has the following formula:
:\operatorname{H}\left[\, \mathbf X \,\right] = -\ln\left(B(\mathbf V, n)\right) - \frac{n-p-1}{2} \operatorname{E}\left[\, \ln\left|\mathbf X\right| \,\right] + \frac{np}{2}
where B(\mathbf V, n) is the normalizing constant of the distribution:
:B(\mathbf V, n) = \frac{1}{\left|\mathbf V\right|^{n/2} 2^{np/2} \Gamma_p\left(\frac n 2\right)}.
This can be expanded as follows:
:\begin{align}
\operatorname{H}\left[\, \mathbf X \,\right] &= \frac n 2 \ln\left|\mathbf V\right| + \frac{np}{2} \ln 2 + \ln\Gamma_p\left(\frac n 2\right) - \frac{n-p-1}{2} \operatorname{E}\left[\, \ln\left|\mathbf X\right| \,\right] + \frac{np}{2} \\[8pt]
&= \frac n 2 \ln\left|\mathbf V\right| + \frac{np}{2} \ln 2 + \ln\Gamma_p\left(\frac n 2\right) - \frac{n-p-1}{2}\left(\psi_p\left(\frac n 2\right) + p\ln 2 + \ln\left|\mathbf V\right|\right) + \frac{np}{2} \\[8pt]
&= \frac n 2 \ln\left|\mathbf V\right| + \frac{np}{2} \ln 2 + \ln\Gamma_p\left(\frac n 2\right) - \frac{n-p-1}{2}\psi_p\left(\frac n 2\right) - \frac{n-p-1}{2}\left(p\ln 2 + \ln\left|\mathbf V\right|\right) + \frac{np}{2} \\[8pt]
&= \frac{p+1}{2}\ln\left|\mathbf V\right| + \frac 1 2 p(p+1)\ln 2 + \ln\Gamma_p\left(\frac n 2\right) - \frac{n-p-1}{2}\psi_p\left(\frac n 2\right) + \frac{np}{2}
\end{align}
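
A direct transcription of the final line, checked against scipy's built-in entropy (assuming a scipy version whose wishart object exposes entropy; parameters are arbitrary examples):

 import numpy as np
 from scipy.special import digamma, multigammaln
 from scipy.stats import wishart

 def wishart_entropy(n, V):
     # H = (p+1)/2 ln|V| + p(p+1)/2 ln 2 + ln Gamma_p(n/2)
     #     - (n-p-1)/2 psi_p(n/2) + np/2
     p = V.shape[0]
     psi_p = sum(digamma(n / 2 + (1 - i) / 2) for i in range(1, p + 1))
     return ((p + 1) / 2 * np.log(np.linalg.det(V))
             + p * (p + 1) / 2 * np.log(2)
             + multigammaln(n / 2, p)
             - (n - p - 1) / 2 * psi_p
             + n * p / 2)

 V = np.array([[1.0, 0.2],
               [0.2, 0.5]])
 print(wishart_entropy(5, V))
 print(wishart(df=5, scale=V).entropy())   # should match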


Cross-entropy

The cross-entropy of two Wishart distributions p_0 with parameters n_0, V_0 and p_1 with parameters n_1, V_1 (both over p \times p matrices) is
:\begin{align}
H(p_0, p_1) &= \operatorname{E}_{p_0}[\, -\log p_1 \,] \\[8pt]
&= \operatorname{E}_{p_0}\left[\, -\log\frac{\left|\mathbf X\right|^{(n_1-p-1)/2} e^{-\operatorname{tr}\left(\mathbf V_1^{-1}\mathbf X\right)/2}}{2^{n_1 p/2} \left|\mathbf V_1\right|^{n_1/2} \Gamma_p\left(\tfrac{n_1}{2}\right)} \right] \\[8pt]
&= \tfrac{n_1 p}{2}\log 2 + \tfrac{n_1}{2}\log\left|\mathbf V_1\right| + \log\Gamma_p\left(\tfrac{n_1}{2}\right) - \tfrac{n_1-p-1}{2}\operatorname{E}_{p_0}\left[\, \log\left|\mathbf X\right| \,\right] + \tfrac 1 2 \operatorname{E}_{p_0}\left[\, \operatorname{tr}\left(\,\mathbf V_1^{-1}\mathbf X\,\right) \,\right] \\[8pt]
&= \tfrac{n_1 p}{2}\log 2 + \tfrac{n_1}{2}\log\left|\mathbf V_1\right| + \log\Gamma_p\left(\tfrac{n_1}{2}\right) - \tfrac{n_1-p-1}{2}\left(\psi_p\left(\tfrac{n_0}{2}\right) + p\log 2 + \log\left|\mathbf V_0\right|\right) + \tfrac 1 2 \operatorname{tr}\left(\, \mathbf V_1^{-1} n_0 \mathbf V_0 \,\right) \\[8pt]
&= -\tfrac{n_1}{2}\log\left|\, \mathbf V_1^{-1}\mathbf V_0 \,\right| + \tfrac{p+1}{2}\log\left|\mathbf V_0\right| + \tfrac{n_0}{2}\operatorname{tr}\left(\, \mathbf V_1^{-1}\mathbf V_0 \,\right) + \log\Gamma_p\left(\tfrac{n_1}{2}\right) - \tfrac{n_1-p-1}{2}\psi_p\left(\tfrac{n_0}{2}\right) + \tfrac{p(p+1)}{2}\log 2
\end{align}
Note that when n_0 = n_1 and V_0 = V_1 we recover the entropy.


KL-divergence

The Kullback–Leibler divergence of p_1 from p_0 is
:\begin{align}
D_{KL}(p_0 \,\|\, p_1) &= H(p_0, p_1) - H(p_0) \\[6pt]
&= -\frac{n_1}{2}\log\left|\mathbf V_1^{-1}\mathbf V_0\right| + \frac{n_0}{2}\left(\operatorname{tr}\left(\mathbf V_1^{-1}\mathbf V_0\right) - p\right) + \log\frac{\Gamma_p\left(\frac{n_1}{2}\right)}{\Gamma_p\left(\frac{n_0}{2}\right)} + \tfrac{n_0-n_1}{2}\psi_p\left(\frac{n_0}{2}\right)
\end{align}
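
A direct transcription of this closed form (a hedged sketch; the parameter sets below are arbitrary examples):

 import numpy as np
 from scipy.special import digamma, multigammaln

 def multivariate_digamma(a, p):
     return sum(digamma(a + (1 - i) / 2) for i in range(1, p + 1))

 def wishart_kl(n0, V0, n1, V1):
     # KL( W_p(V0, n0) || W_p(V1, n1) ), following the formula above
     p = V0.shape[0]
     M = np.linalg.solve(V1, V0)                 # V1^{-1} V0
     return (-n1 / 2 * np.log(np.linalg.det(M))
             + n0 / 2 * (np.trace(M) - p)
             + multigammaln(n1 / 2, p) - multigammaln(n0 / 2, p)
             + (n0 - n1) / 2 * multivariate_digamma(n0 / 2, p))

 V0 = np.array([[1.0, 0.3], [0.3, 2.0]])
 V1 = np.eye(2)
 print(wishart_kl(5, V0, 7, V1))   # nonnegative
 print(wishart_kl(5, V0, 5, V0))   # ~ 0.0 for identical distributions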


Characteristic function

The characteristic function of the Wishart distribution is
:\Theta \mapsto \operatorname{E}\left[\, \exp\left(\, i\operatorname{tr}\left(\,\mathbf X \boldsymbol\Theta\,\right)\,\right)\, \right] = \left|\, \mathbf I - 2i\,\boldsymbol\Theta\,\mathbf V \,\right|^{-n/2}
where \operatorname{E} denotes expectation. (Here \boldsymbol\Theta is any matrix with the same dimensions as \mathbf V, \mathbf I indicates the identity matrix, and i is a square root of -1). Properly interpreting this formula requires a little care, because noninteger complex powers are multivalued; when n is noninteger, the correct branch must be determined via analytic continuation.


Theorem

If a p \times p random matrix \mathbf X has a Wishart distribution with m degrees of freedom and variance matrix \mathbf V (write \mathbf X \sim \mathcal{W}_p(\mathbf V, m)), and \mathbf C is a q \times p matrix of rank q, then
:\mathbf C \mathbf X \mathbf C^T \sim \mathcal{W}_q\left(\mathbf C \mathbf V \mathbf C^T, m\right).
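
A quick Monte Carlo sanity check of this closure property (shapes and values are illustrative): the empirical mean of \mathbf C \mathbf X \mathbf C^T should approach m\,\mathbf C \mathbf V \mathbf C^T, the mean of \mathcal{W}_q(\mathbf C \mathbf V \mathbf C^T, m).

 import numpy as np
 from scipy.stats import wishart

 m = 8
 V = np.array([[2.0, 0.4, 0.0],
               [0.4, 1.0, 0.2],
               [0.0, 0.2, 1.5]])      # p = 3
 C = np.array([[1.0, 0.0, -1.0],
               [0.0, 2.0,  1.0]])     # q = 2, full row rank

 X = wishart.rvs(df=m, scale=V, size=50_000, random_state=0)
 Y = np.einsum('qp,npr,sr->nqs', C, X, C)   # Y_i = C X_i C^T for each sample

 print(Y.mean(axis=0))      # empirical mean
 print(m * C @ V @ C.T)     # theoretical mean of W_q(C V C^T, m)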


Corollary 1

If \mathbf z is a nonzero p \times 1 constant vector, then:
:\sigma_z^{-2}\, \mathbf z^T \mathbf X \mathbf z \sim \chi_m^2.
In this case, \chi_m^2 is the chi-squared distribution and \sigma_z^2 = \mathbf z^T \mathbf V \mathbf z (note that \sigma_z^2 is a constant; it is positive because \mathbf V is positive definite).
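
An empirical check of the corollary (example values; \chi_m^2 has mean m and variance 2m):

 import numpy as np
 from scipy.stats import wishart

 m = 9
 V = np.array([[2.0, 0.5],
               [0.5, 1.0]])
 z = np.array([1.0, -2.0])                 # nonzero constant vector

 X = wishart.rvs(df=m, scale=V, size=100_000, random_state=0)
 q = np.einsum('i,nij,j->n', z, X, z) / (z @ V @ z)   # sigma_z^{-2} z^T X z

 print(q.mean(), q.var())   # ~ m = 9 and ~ 2m = 18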


Corollary 2

Consider the case where \mathbf z^T = (0, \dots, 0, 1, 0, \dots, 0) (that is, the j-th element is one and all others zero). Then corollary 1 above shows that
:\sigma_{jj}^{-1}\, w_{jj} \sim \chi_m^2
gives the marginal distribution of each of the elements on the matrix's diagonal. George Seber points out that the Wishart distribution is not called the "multivariate chi-squared distribution" because the marginal distribution of the off-diagonal elements is not chi-squared. Seber prefers to reserve the term ''multivariate'' for the case when all univariate marginals belong to the same family.


Estimator of the multivariate normal distribution

The Wishart distribution is the sampling distribution of the maximum-likelihood estimator (MLE) of the covariance matrix of a multivariate normal distribution. A derivation of the MLE uses the spectral theorem.


Bartlett decomposition

The Bartlett decomposition of a matrix \mathbf X from a p-variate Wishart distribution with scale matrix \mathbf V and n degrees of freedom is the factorization:
:\mathbf X = \mathbf L \mathbf A \mathbf A^T \mathbf L^T,
where \mathbf L is the Cholesky factor of \mathbf V, and:
:\mathbf A = \begin{pmatrix}
c_1 & 0 & 0 & \cdots & 0 \\
n_{21} & c_2 & 0 & \cdots & 0 \\
n_{31} & n_{32} & c_3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
n_{p1} & n_{p2} & n_{p3} & \cdots & c_p
\end{pmatrix}
where c_i^2 \sim \chi_{n-i+1}^2 and n_{ij} \sim N(0,1) independently. This provides a useful method for obtaining random samples from a Wishart distribution.
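
A sketch of a Bartlett sampler (a direct transcription of the factorization above, not a library routine; numpy only):

 import numpy as np

 def wishart_bartlett(V, n, rng):
     """Draw one sample from W_p(V, n) via the Bartlett decomposition."""
     p = V.shape[0]
     L = np.linalg.cholesky(V)              # V = L L^T
     A = np.zeros((p, p))
     # Diagonal: c_i with c_i^2 ~ chi^2_{n-i+1}, i = 1, ..., p
     A[np.diag_indices(p)] = np.sqrt(rng.chisquare(n - np.arange(p)))
     # Strictly lower triangle: independent standard normals
     A[np.tril_indices(p, k=-1)] = rng.standard_normal(p * (p - 1) // 2)
     LA = L @ A
     return LA @ LA.T                       # X = L A A^T L^T

 rng = np.random.default_rng(42)
 V = np.array([[1.0, 0.5],
               [0.5, 2.0]])
 n = 6

 # Sanity check: the sample mean over many draws should approach n * V.
 mean = sum(wishart_bartlett(V, n, rng) for _ in range(20_000)) / 20_000
 print(mean)
 print(n * V)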


Marginal distribution of matrix elements

Let \mathbf V be a 2 \times 2 variance matrix characterized by correlation coefficient -1 < \rho < 1 and \mathbf L its lower Cholesky factor:
:\mathbf V = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}, \qquad \mathbf L = \begin{pmatrix} \sigma_1 & 0 \\ \rho\sigma_2 & \sqrt{1-\rho^2}\,\sigma_2 \end{pmatrix}
Multiplying through the Bartlett decomposition above, we find that a random sample from the 2 \times 2 Wishart distribution is
:\mathbf X = \begin{pmatrix} \sigma_1^2 c_1^2 & \sigma_1\sigma_2\left(\rho c_1^2 + \sqrt{1-\rho^2}\, c_1 n_{21}\right) \\ \sigma_1\sigma_2\left(\rho c_1^2 + \sqrt{1-\rho^2}\, c_1 n_{21}\right) & \sigma_2^2\left(\left(1-\rho^2\right) c_2^2 + \left(\sqrt{1-\rho^2}\, n_{21} + \rho c_1\right)^2\right) \end{pmatrix}
The diagonal elements, most evidently in the first element, follow the \chi^2 distribution with n degrees of freedom (scaled by \sigma^2) as expected. The off-diagonal element is less familiar but can be identified as a normal variance-mean mixture where the mixing density is a \chi^2 distribution. The corresponding marginal probability density for the off-diagonal element is therefore the variance-gamma distribution
:f(x_{12}) = \frac{\left|x_{12}\right|^{(n-1)/2}}{\Gamma\left(\frac n 2\right)\sqrt{2^{n-1}\pi\left(1-\rho^2\right)\left(\sigma_1\sigma_2\right)^{n+1}}} \cdot K_{(n-1)/2}\left(\frac{\left|x_{12}\right|}{\sigma_1\sigma_2\left(1-\rho^2\right)}\right)\exp\left(\frac{\rho x_{12}}{\sigma_1\sigma_2\left(1-\rho^2\right)}\right)
where K_{(n-1)/2} is the modified Bessel function of the second kind. Similar results may be found for higher dimensions. In general, if X follows a Wishart distribution with parameters \Sigma, n, then for i \neq j, the off-diagonal elements
:X_{ij} \sim \text{VG}\left(n, \Sigma_{ij}, \left(\Sigma_{ii}\Sigma_{jj} - \Sigma_{ij}^2\right)^{1/2}, 0\right).
It is also possible to write down the moment-generating function even in the ''noncentral'' case (essentially the n-th power of Craig (1936) equation 10) although the probability density becomes an infinite sum of Bessel functions.


The range of the shape parameter

It can be shown that the Wishart distribution can be defined if and only if the shape parameter n belongs to the set
:\Lambda_p := \{0, 1, \dots, p-1\} \cup \left(p-1, \infty\right).
This set is named after Simon Gindikin, who introduced it in the 1970s in the context of gamma distributions on homogeneous cones. However, for the new parameters in the discrete spectrum of the Gindikin ensemble, namely,
:\Lambda_p^* := \{0, 1, \dots, p-1\},
the corresponding Wishart distribution has no Lebesgue density.


Relationships to other distributions

* The Wishart distribution is related to the inverse-Wishart distribution, denoted by W_p^{-1}, as follows: If \mathbf X \sim W_p(\mathbf V, n) and if we do the change of variables \mathbf C = \mathbf X^{-1}, then \mathbf C \sim W_p^{-1}(\mathbf V^{-1}, n). This relationship may be derived by noting that the absolute value of the Jacobian determinant of this change of variables is \left|\mathbf C\right|^{p+1}, see for example equation (15.15) in. (A numerical illustration appears after this list.)
* In Bayesian statistics, the Wishart distribution is a conjugate prior for the precision parameter of the multivariate normal distribution, when the mean parameter is known.
* A generalization is the multivariate gamma distribution.
* A different type of generalization is the normal-Wishart distribution, essentially the product of a multivariate normal distribution with a Wishart distribution.
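
A numerical illustration of the first relationship above (a sketch; it checks that inverted Wishart draws have the inverse-Wishart mean \mathbf V^{-1}/(n-p-1)):

 import numpy as np
 from scipy.stats import wishart

 n, p = 7, 2
 V = np.array([[1.0, 0.3],
               [0.3, 2.0]])

 # Draw Wishart samples and apply the change of variables C = X^{-1} ...
 X = wishart.rvs(df=n, scale=V, size=100_000, random_state=0)
 C = np.linalg.inv(X)                       # batched inverse over the samples

 # ... their mean should match the inverse-Wishart mean V^{-1} / (n - p - 1).
 print(C.mean(axis=0))
 print(np.linalg.inv(V) / (n - p - 1))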


See also

* Chi-squared distribution
* Complex Wishart distribution
* F-distribution
* Gamma distribution
* Hotelling's T-squared distribution
* Inverse-Wishart distribution
* Multivariate gamma distribution
* Student's t-distribution
* Wilks' lambda distribution


References


External links


A C++ library for random matrix generation