Wishart Distribution
In statistics, the Wishart distribution is a generalization of the gamma distribution to multiple dimensions. It is named in honor of John Wishart, who first formulated the distribution in 1928. Other names include Wishart ensemble (in random matrix theory, probability distributions over matrices are usually called "ensembles"), Wishart–Laguerre ensemble (since its eigenvalue distribution involves Laguerre polynomials), or LOE, LUE, LSE (in analogy with GOE, GUE, GSE). It is a family of probability distributions defined over symmetric, positive-definite random matrices (i.e. matrix-valued random variables). These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse covariance matrix of a multivariate normal random vector.


Definition

Suppose G is a p \times n matrix, each column of which is independently drawn from a p-variate normal distribution with zero mean:
:G = (g_1,\dots,g_n), \qquad g_i \sim \mathcal{N}_p(0,V).
Then the Wishart distribution is the probability distribution of the p \times p random matrix
:S = G G^T = \sum_{i=1}^n g_i g_i^T
known as the scatter matrix. One indicates that S has that probability distribution by writing
:S \sim W_p(V,n).
The positive integer n is the number of ''degrees of freedom''. Sometimes this is written W(V,p,n). For n \ge p the matrix S is invertible with probability 1 if V is invertible. If p = V = 1 then this distribution is a chi-squared distribution with n degrees of freedom.
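This construction translates directly into code. The sketch below (a minimal illustration using NumPy; the particular V, p and n are arbitrary choices, not from the source) builds Wishart samples as scatter matrices G G^T of Gaussian columns and checks the known mean E[S] = nV by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, N = 2, 5, 20000           # dimension, degrees of freedom, Monte Carlo replicates
V = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # illustrative scale matrix (an assumption)
L = np.linalg.cholesky(V)

# Each replicate: draw n columns g_i ~ N_p(0, V) and form the scatter matrix G G^T.
G = L @ rng.standard_normal((N, p, n))   # batch of N matrices whose columns are N_p(0, V)
S = G @ G.transpose(0, 2, 1)             # S = G G^T  ~  W_p(V, n)

print(S.mean(axis=0))   # should be close to n * V = [[10, 2.5], [2.5, 5]]
```

The empirical mean matching nV is a quick sanity check that the scatter-matrix construction indeed produces Wishart draws.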


Occurrence

The Wishart distribution arises as the distribution of the sample covariance matrix for a sample from a multivariate normal distribution. It occurs frequently in likelihood-ratio tests in multivariate statistical analysis. It also arises in the spectral theory of random matrices and in multidimensional Bayesian analysis. It is also encountered in wireless communications, while analyzing the performance of Rayleigh fading MIMO wireless channels.


Probability density function

The Wishart distribution can be characterized by its probability density function as follows: Let \mathbf{X} be a p \times p symmetric matrix of random variables that is positive semi-definite. Let \mathbf{V} be a (fixed) symmetric positive definite matrix of size p \times p. Then, if n \ge p, \mathbf{X} has a Wishart distribution with n degrees of freedom if it has the probability density function
: f_{\mathbf X}(\mathbf X) = \frac{1}{2^{np/2}\left|\mathbf V\right|^{n/2}\Gamma_p\left(\frac n 2\right)} \left|\mathbf X\right|^{(n-p-1)/2} e^{-\frac{1}{2}\operatorname{tr}\left(\mathbf{V}^{-1}\mathbf{X}\right)}
where \left|\mathbf X\right| is the determinant of \mathbf X and \Gamma_p is the multivariate gamma function defined as
:\Gamma_p\left(\frac n 2\right) = \pi^{p(p-1)/4}\prod_{j=1}^p \Gamma\left(\frac n 2 - \frac{j-1}{2}\right).
The density above is not the joint density of all the p^2 elements of the random matrix \mathbf X (such a density does not exist because of the symmetry constraints X_{ij}=X_{ji}); it is rather the joint density of the p(p+1)/2 elements X_{ij} for i\le j. Also, the density formula above applies only to positive definite matrices \mathbf X; for other matrices the density is equal to zero.
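As a check on the density formula, the sketch below (with arbitrary illustrative values of V, X and n) transcribes it in log form and compares it against SciPy's built-in `wishart.logpdf`:

```python
import numpy as np
from scipy.stats import wishart
from scipy.special import multigammaln

p, n = 2, 5
V = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # scale matrix (illustrative)
X = np.array([[9.0, 2.0],
              [2.0, 6.0]])      # arbitrary positive definite evaluation point

def wishart_logpdf(X, V, n):
    # Direct transcription of the density formula, in log form for stability.
    p = V.shape[0]
    logdet_x = np.linalg.slogdet(X)[1]
    logdet_v = np.linalg.slogdet(V)[1]
    trace_term = np.trace(np.linalg.solve(V, X))   # tr(V^{-1} X)
    return (0.5 * (n - p - 1) * logdet_x
            - 0.5 * trace_term
            - 0.5 * n * p * np.log(2)
            - 0.5 * n * logdet_v
            - multigammaln(0.5 * n, p))            # log of the multivariate gamma

print(np.isclose(wishart_logpdf(X, V, n), wishart.logpdf(X, df=n, scale=V)))  # True
```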


Spectral density

The joint-eigenvalue density for the eigenvalues \lambda_1,\dots,\lambda_p \ge 0 of a random matrix \mathbf{X}\sim W_p(\mathbf{I},n) is
: c_{n,p}\, e^{-\frac{1}{2}\sum_i \lambda_i}\prod_i \lambda_i^{(n-p-1)/2}\prod_{i<j}\left|\lambda_i-\lambda_j\right|
where c_{n,p} is a constant. In fact the above definition can be extended to any real n > p-1. If n \le p-1, then the Wishart no longer has a density; instead it represents a singular distribution that takes values in a lower-dimension subspace of the space of p \times p matrices.


Use in Bayesian statistics

In Bayesian statistics, in the context of the multivariate normal distribution, the Wishart distribution is the conjugate prior to the precision matrix \mathbf{\Omega} = \mathbf{\Sigma}^{-1}, where \mathbf{\Sigma} is the covariance matrix.


Choice of parameters

The least informative, proper Wishart prior is obtained by setting n = p. A common choice for \mathbf{V} leverages the fact that the mean of \mathbf{X} \sim W_p(\mathbf{V}, n) is n\mathbf{V}. Then \mathbf{V} is chosen so that n\mathbf{V} equals an initial guess for \mathbf{X}. For instance, when estimating a precision matrix \mathbf{\Sigma}^{-1} \sim W_p(\mathbf{V}, n), a reasonable choice for \mathbf{V} would be n^{-1}\mathbf{\Sigma}_0^{-1}, where \mathbf{\Sigma}_0 is some prior estimate for the covariance matrix \mathbf{\Sigma}.
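To make the conjugacy concrete, the sketch below (an assumption-laden illustration, not from the source: it fixes a zero-mean normal model and uses a hypothetical helper name) implements the standard posterior update, under which a W_p(V, n) prior on the precision combined with N observations yields a W_p((V^{-1} + \sum_i x_i x_i^T)^{-1}, n + N) posterior:

```python
import numpy as np

def wishart_posterior(V, n, X):
    """Conjugate update for the precision Sigma^{-1} of a zero-mean normal model.

    Prior:     Sigma^{-1} ~ W_p(V, n)
    Data:      rows of X drawn i.i.d. from N_p(0, Sigma)
    Posterior: Sigma^{-1} ~ W_p((V^{-1} + X^T X)^{-1}, n + N)
    """
    V_post = np.linalg.inv(np.linalg.inv(V) + X.T @ X)
    return V_post, n + X.shape[0]

# With abundant data, the posterior mean (n + N) V_post approaches the true precision.
rng = np.random.default_rng(1)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
X = rng.multivariate_normal(np.zeros(2), Sigma, size=5000)
V_post, n_post = wishart_posterior(np.eye(2), 2, X)   # least informative prior: n = p
print(n_post * V_post)   # close to inv(Sigma)
```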


Properties


Log-expectation

The following formula plays a role in variational Bayes derivations for Bayes networks involving the Wishart distribution:
:\operatorname{E}\left[\,\ln\left|\mathbf{X}\right|\,\right] = \psi_p\left(\frac n 2\right) + p\ln(2) + \ln\left|\mathbf{V}\right|
where \psi_p is the multivariate digamma function (the derivative of the log of the multivariate gamma function).
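The log-expectation identity is easy to verify numerically. The sketch below (illustrative parameter values; the multivariate digamma is expanded as a sum of ordinary digamma terms) compares the closed form against a Monte Carlo average:

```python
import numpy as np
from scipy.stats import wishart
from scipy.special import psi

p, n = 2, 5
V = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Multivariate digamma: psi_p(a) = sum_{i=1}^{p} psi(a + (1 - i) / 2)
psi_p = sum(psi(n / 2 + (1 - i) / 2) for i in range(1, p + 1))
closed_form = psi_p + p * np.log(2) + np.linalg.slogdet(V)[1]

samples = wishart.rvs(df=n, scale=V, size=50000, random_state=0)
monte_carlo = np.linalg.slogdet(samples)[1].mean()   # average of ln|X| over the draws
print(closed_form, monte_carlo)   # the two values agree closely
```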


Log-variance

The following variance computation could be of help in Bayesian statistics:
: \operatorname{Var}\left[\,\ln\left|\mathbf{X}\right|\,\right] = \sum_{i=1}^p \psi_1\left(\frac{n+1-i}{2}\right)
where \psi_1 is the trigamma function. This comes up when computing the Fisher information of the Wishart random variable.


Entropy

The information entropy of the distribution has the following formula:
:\operatorname{H}\left[\,\mathbf{X}\,\right] = -\ln\left(B(\mathbf{V},n)\right) - \frac{n-p-1}{2}\operatorname{E}\left[\,\ln\left|\mathbf{X}\right|\,\right] + \frac{np}{2}
where B(\mathbf{V},n) is the normalizing constant of the distribution:
:B(\mathbf{V},n) = \frac{1}{\left|\mathbf{V}\right|^{n/2} 2^{np/2}\Gamma_p\left(\frac n 2\right)}.
This can be expanded as follows:
: \begin{align} \operatorname{H}\left[\,\mathbf{X}\,\right] & = \frac n 2 \ln\left|\mathbf{V}\right| + \frac{np}{2}\ln 2 + \ln\Gamma_p\left(\frac n 2\right) - \frac{n-p-1}{2}\operatorname{E}\left[\,\ln\left|\mathbf{X}\right|\,\right] + \frac{np}{2} \\
&= \frac n 2 \ln\left|\mathbf{V}\right| + \frac{np}{2}\ln 2 + \ln\Gamma_p\left(\frac n 2\right) - \frac{n-p-1}{2}\left(\psi_p\left(\frac n 2\right) + p\ln 2 + \ln\left|\mathbf{V}\right|\right) + \frac{np}{2} \\
&= \frac{p+1}{2}\ln\left|\mathbf{V}\right| + \frac 1 2 p(p+1)\ln 2 + \ln\Gamma_p\left(\frac n 2\right) - \frac{n-p-1}{2}\psi_p\left(\frac n 2\right) + \frac{np}{2} \end{align}


Cross-entropy

The cross-entropy of two Wishart distributions p_0 with parameters n_0, V_0 and p_1 with parameters n_1, V_1 is
:\begin{align} H(p_0, p_1) &= \operatorname{E}_{p_0}\left[\,-\log p_1\,\right] \\
&= \frac{n_1 p}{2}\log 2 + \frac{n_1}{2}\log\left|\mathbf{V}_1\right| + \log\Gamma_p\left(\frac{n_1}{2}\right) - \frac{n_1-p-1}{2}\operatorname{E}_{p_0}\left[\,\log\left|\mathbf{X}\right|\,\right] + \frac{1}{2}\operatorname{E}_{p_0}\left[\,\operatorname{tr}\left(\mathbf{V}_1^{-1}\mathbf{X}\right)\,\right] \\
&= \frac{n_1 p}{2}\log 2 + \frac{n_1}{2}\log\left|\mathbf{V}_1\right| + \log\Gamma_p\left(\frac{n_1}{2}\right) - \frac{n_1-p-1}{2}\left(\psi_p\left(\frac{n_0}{2}\right) + p\log 2 + \log\left|\mathbf{V}_0\right|\right) + \frac{n_0}{2}\operatorname{tr}\left(\mathbf{V}_1^{-1}\mathbf{V}_0\right) \\
&= -\frac{n_1}{2}\log\left|\mathbf{V}_1^{-1}\mathbf{V}_0\right| + \frac{p+1}{2}\log\left|\mathbf{V}_0\right| + \frac{n_0}{2}\operatorname{tr}\left(\mathbf{V}_1^{-1}\mathbf{V}_0\right) + \log\Gamma_p\left(\frac{n_1}{2}\right) - \frac{n_1-p-1}{2}\psi_p\left(\frac{n_0}{2}\right) + \frac{p(p+1)}{2}\log 2 \end{align}
Note that when the two distributions have the same parameters (n_0 = n_1 and \mathbf{V}_0 = \mathbf{V}_1) we recover the entropy.


KL-divergence

The
Kullback–Leibler divergence In mathematical statistics, the Kullback–Leibler (KL) divergence (also called relative entropy and I-divergence), denoted D_\text(P \parallel Q), is a type of statistical distance: a measure of how much a model probability distribution is diff ...
of p_1 from p_0 is
: \begin{align} D_{KL}(p_0 \| p_1) & = H(p_0, p_1) - H(p_0) \\
& = -\frac{n_1}{2}\log\left|\mathbf{V}_1^{-1}\mathbf{V}_0\right| + \frac{n_0}{2}\left(\operatorname{tr}\left(\mathbf{V}_1^{-1}\mathbf{V}_0\right) - p\right) + \log\frac{\Gamma_p\left(\frac{n_1}{2}\right)}{\Gamma_p\left(\frac{n_0}{2}\right)} + \frac{n_0-n_1}{2}\psi_p\left(\frac{n_0}{2}\right) \end{align}


Characteristic function

The characteristic function of the Wishart distribution is
:\Theta \mapsto \operatorname{E}\left[\,\exp\left(i\operatorname{tr}\left(\mathbf{X}\mathbf{\Theta}\right)\right)\,\right] = \left|\,\mathbf{1} - 2i\,\mathbf{\Theta}\,\mathbf{V}\,\right|^{-n/2}
where \operatorname{E}[\cdot] denotes expectation. (Here \mathbf{\Theta} is any matrix with the same dimensions as \mathbf{V}, \mathbf{1} indicates the identity matrix, and i is a square root of -1.) Properly interpreting this formula requires a little care, because noninteger complex powers are multivalued; when n is noninteger, the correct branch must be determined via analytic continuation.


Theorem

If a p \times p random matrix \mathbf{X} has a Wishart distribution with m degrees of freedom and variance matrix \mathbf{V}, written \mathbf{X}\sim\mathcal{W}_p(\mathbf{V},m), and \mathbf{C} is a q \times p matrix of rank q, then
:\mathbf{C}\mathbf{X}\mathbf{C}^T \sim \mathcal{W}_q\left(\mathbf{C}\mathbf{V}\mathbf{C}^T,m\right).


Corollary 1

If \mathbf{z} is a nonzero p \times 1 constant vector, then:
:\sigma_z^{-2}\,\mathbf{z}^T\mathbf{X}\mathbf{z} \sim \chi_m^2.
In this case, \chi_m^2 is the chi-squared distribution and \sigma_z^2 = \mathbf{z}^T\mathbf{V}\mathbf{z} (note that \sigma_z^2 is a constant; it is positive because \mathbf{V} is positive definite).


Corollary 2

Consider the case where \mathbf{z} = (0,\dots,0,1,0,\dots,0)^T (that is, the j-th element is one and all others zero). Then corollary 1 above shows that
:\sigma_{jj}^{-1}\,w_{jj} \sim \chi^2_m
gives the marginal distribution of each of the elements on the matrix's diagonal. George Seber points out that the Wishart distribution is not called the "multivariate chi-squared distribution" because the marginal distribution of the off-diagonal elements is not chi-squared. Seber prefers to reserve the term multivariate for the case when all univariate marginals belong to the same family.


Estimator of the multivariate normal distribution

The Wishart distribution is the sampling distribution of the maximum-likelihood estimator (MLE) of the covariance matrix of a multivariate normal distribution. A derivation of the MLE uses the spectral theorem.


Bartlett decomposition

The Bartlett decomposition of a matrix \mathbf{X} from a p-variate Wishart distribution with scale matrix \mathbf{V} and n degrees of freedom is the factorization:
:\mathbf{X} = \mathbf{L}\mathbf{A}\mathbf{A}^T\mathbf{L}^T,
where \mathbf{L} is the Cholesky factor of \mathbf{V}, and:
:\mathbf{A} = \begin{pmatrix} c_1 & 0 & 0 & \cdots & 0 \\ n_{21} & c_2 & 0 & \cdots & 0 \\ n_{31} & n_{32} & c_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ n_{p1} & n_{p2} & n_{p3} & \cdots & c_p \end{pmatrix}
where c_i^2 \sim \chi^2_{n-i+1} and n_{ij} \sim \mathcal{N}(0,1) independently. This provides a useful method for obtaining random samples from a Wishart distribution.
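The sampling recipe can be sketched in a few lines of NumPy (the helper name and the particular V and n are illustrative assumptions). The Monte Carlo mean of the draws should approach nV:

```python
import numpy as np

def wishart_bartlett(V, n, rng):
    """Draw one W_p(V, n) sample via the Bartlett decomposition X = L A A^T L^T."""
    p = V.shape[0]
    L = np.linalg.cholesky(V)                        # Cholesky factor of the scale matrix
    A = np.tril(rng.standard_normal((p, p)), k=-1)   # n_ij ~ N(0, 1) below the diagonal
    # Diagonal entries: c_i is the square root of a chi-squared draw with n - i + 1 dof.
    A[np.diag_indices(p)] = np.sqrt(rng.chisquare(n - np.arange(p)))
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(0)
V = np.array([[2.0, 0.5],
              [0.5, 1.0]])
n = 5
mean = np.mean([wishart_bartlett(V, n, rng) for _ in range(20000)], axis=0)
print(mean)   # close to n * V = [[10, 2.5], [2.5, 5]]
```

Compared with forming G G^T from n Gaussian columns, the Bartlett construction needs only p(p+1)/2 scalar draws per sample, which is why it is the standard sampler.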


Marginal distribution of matrix elements

Let \mathbf{V} be a 2 \times 2 variance matrix characterized by correlation coefficient -1 < \rho < 1 and \mathbf{L} its lower Cholesky factor:
:\mathbf{V} = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}, \qquad \mathbf{L} = \begin{pmatrix} \sigma_1 & 0 \\ \rho\sigma_2 & \sqrt{1-\rho^2}\,\sigma_2 \end{pmatrix}
Multiplying through the Bartlett decomposition above, we find that a random sample from the 2 \times 2 Wishart distribution is
:\mathbf{X} = \begin{pmatrix} \sigma_1^2 c_1^2 & \sigma_1\sigma_2\left(\rho c_1^2 + \sqrt{1-\rho^2}\,c_1 n_{21}\right) \\ \sigma_1\sigma_2\left(\rho c_1^2 + \sqrt{1-\rho^2}\,c_1 n_{21}\right) & \sigma_2^2\left(\left(1-\rho^2\right)c_2^2 + \left(\sqrt{1-\rho^2}\,n_{21} + \rho c_1\right)^2\right) \end{pmatrix}
The diagonal elements, most evidently in the first element, follow the \chi^2 distribution with n degrees of freedom (scaled by \sigma^2) as expected. The off-diagonal element is less familiar but can be identified as a normal variance-mean mixture where the mixing density is a \chi^2 distribution. The corresponding marginal probability density for the off-diagonal element is therefore the variance-gamma distribution
:f(x_{12}) = \frac{\left|x_{12}\right|^{\frac{n-1}{2}}}{\Gamma\left(\frac n 2\right)\sqrt{2^{n-1}\pi\left(1-\rho^2\right)\left(\sigma_1\sigma_2\right)^{n+1}}} \cdot K_{\frac{n-1}{2}}\left(\frac{\left|x_{12}\right|}{\sigma_1\sigma_2\left(1-\rho^2\right)}\right)\exp\left(\frac{\rho x_{12}}{\sigma_1\sigma_2\left(1-\rho^2\right)}\right)
where K_{\nu}(z) is the modified Bessel function of the second kind. Similar results may be found for higher dimensions. In general, if \mathbf{X} follows a Wishart distribution with parameters \mathbf{\Sigma}, n, then for i \neq j the off-diagonal elements
: X_{ij} \sim \operatorname{VG}\left(n, \Sigma_{ij}, \left(\Sigma_{ii}\Sigma_{jj} - \Sigma_{ij}^2\right)^{1/2}, 0\right).
It is also possible to write down the moment-generating function even in the ''noncentral'' case (essentially the ''n''th power of Craig (1936) equation 10), although the probability density becomes an infinite sum of Bessel functions.


The range of the shape parameter

It can be shown that the Wishart distribution can be defined if and only if the shape parameter n belongs to the set
:\Lambda_p := \{0, \dots, p-1\} \cup \left(p-1, \infty\right).
This set is named after Simon Gindikin, who introduced it in the 1970s in the context of gamma distributions on homogeneous cones. However, for the new parameters in the discrete spectrum of the Gindikin ensemble, namely,
:\Lambda_p^* := \{0, \dots, p-1\},
the corresponding Wishart distribution has no Lebesgue density.


Relationships to other distributions

* The Wishart distribution is related to the inverse-Wishart distribution, denoted by W_p^{-1}, as follows: If \mathbf{X}\sim W_p(\mathbf{V},n) and if we do the change of variables \mathbf{C}=\mathbf{X}^{-1}, then \mathbf{C}\sim W_p^{-1}(\mathbf{V}^{-1},n). This relationship may be derived by noting that the absolute value of the Jacobian determinant of this change of variables is \left|\mathbf{C}\right|^{-(p+1)}; see for example equation (15.15).
* In Bayesian statistics, the Wishart distribution is a conjugate prior for the precision parameter of the multivariate normal distribution, when the mean parameter is known.
* A generalization is the multivariate gamma distribution.
* A different type of generalization is the normal-Wishart distribution, essentially the product of a multivariate normal distribution with a Wishart distribution.
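The Wishart/inverse-Wishart relationship and its Jacobian factor can be checked numerically. The sketch below (with illustrative n, V and an arbitrary positive definite point) verifies that the two SciPy log-densities differ by exactly the change-of-variables term (p+1)\ln|\mathbf{S}|:

```python
import numpy as np
from scipy.stats import wishart, invwishart

p, n = 2, 6
V = np.array([[2.0, 0.5],
              [0.5, 1.0]])
S = np.array([[8.0, 1.0],
              [1.0, 5.0]])            # arbitrary positive definite evaluation point
C = np.linalg.inv(S)

# Change of variables C = S^{-1}: the densities differ by the Jacobian factor |S|^{p+1}.
lhs = invwishart.logpdf(C, df=n, scale=np.linalg.inv(V))
rhs = wishart.logpdf(S, df=n, scale=V) + (p + 1) * np.linalg.slogdet(S)[1]

print(np.isclose(lhs, rhs))  # True
```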


See also

* Chi-squared distribution
* Complex Wishart distribution
* F-distribution
* Gamma distribution
* Hotelling's T-squared distribution
* Inverse-Wishart distribution
* Multivariate gamma distribution
* Student's t-distribution
* Wilks' lambda distribution


References


External links


A C++ library for random matrix generation