Multivariate Stable Distribution

The multivariate stable distribution is a multivariate probability distribution that is a multivariate generalisation of the univariate stable distribution. The multivariate stable distribution defines linear relations between stable distribution marginals. In the same way as for the univariate case, the distribution is defined in terms of its characteristic function. The multivariate stable distribution can also be thought of as an extension of the multivariate normal distribution. It has a parameter ''α'', defined over the range 0 < ''α'' ≤ 2, where the case ''α'' = 2 is equivalent to the multivariate normal distribution. It also has a skew parameter that allows for non-symmetric distributions, whereas the multivariate normal distribution is symmetric.


Definition

Let \mathbb S be the Euclidean unit sphere in \mathbb R^d, that is, \mathbb S = \{ s \in \mathbb R^d : |s| = 1 \}. A random vector ''X'' has a multivariate stable distribution, denoted X \sim S(\alpha, \Lambda, \delta), if the joint characteristic function of ''X'' is
: \operatorname{E} \exp(i u^T X) = \exp\left\{ -\int_{s \in \mathbb S} \left( |u^T s|^\alpha + i\, \nu(u^T s, \alpha) \right) \Lambda(ds) + i u^T \delta \right\}
where 0 < ''α'' < 2, and for y \in \mathbb R
: \nu(y, \alpha) = \begin{cases} -\operatorname{sign}(y) \tan(\pi \alpha / 2)\, |y|^\alpha & \alpha \ne 1, \\ (2/\pi)\, y \ln |y| & \alpha = 1. \end{cases}
This is essentially the result of Feldheim, that any stable random vector can be characterized by a spectral measure \Lambda (a finite measure on \mathbb S) and a shift vector \delta \in \mathbb R^d.
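For example, setting ''α'' = 2 makes \nu vanish (since \tan \pi = 0), and |u^T s|^2 = u^T s s^T u, so the characteristic function reduces to
: \operatorname{E} \exp(i u^T X) = \exp\left\{ -u^T \Sigma u + i u^T \delta \right\}, \quad \Sigma = \int_{s \in \mathbb S} s s^T \, \Lambda(ds),
which is the multivariate normal distribution with mean \delta and covariance matrix 2\Sigma.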


Parametrization using projections

Another way to describe a stable random vector is in terms of projections. For any vector u, the projection u^T X is univariate \alpha-stable with some skewness \beta(u), scale \gamma(u), and some shift \delta(u). The notation X \sim S(\alpha, \beta(\cdot), \gamma(\cdot), \delta(\cdot)) is used if ''X'' is stable with u^T X \sim s(\alpha, \beta(u), \gamma(u), \delta(u)) for every u \in \mathbb R^d. This is called the projection parametrization. The spectral measure determines the projection parameter functions by:
: \gamma(u) = \Bigl( \int_{s \in \mathbb S} |u^T s|^\alpha \, \Lambda(ds) \Bigr)^{1/\alpha}
: \beta(u) = \gamma(u)^{-\alpha} \int_{s \in \mathbb S} |u^T s|^\alpha \operatorname{sign}(u^T s) \, \Lambda(ds)
: \delta(u) = \begin{cases} u^T \delta & \alpha \ne 1 \\ u^T \delta - \int_{s \in \mathbb S} \tfrac{2}{\pi}\, u^T s \ln |u^T s| \, \Lambda(ds) & \alpha = 1 \end{cases}
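A minimal numerical sketch of these parameter functions, assuming a discrete spectral measure with masses \lambda_j at unit vectors s_j (the helper name is illustrative, not a library routine; for a general \Lambda the integrals would require quadrature over the sphere):

 import numpy as np

 def projection_parameters(u, s, lam, alpha):
     """Scale gamma(u) and skewness beta(u) of the projection u^T X
     for a discrete spectral measure: masses lam[j] at unit vectors s[j]."""
     proj = s @ u                                   # u^T s_j for each j
     gamma = (lam @ np.abs(proj) ** alpha) ** (1.0 / alpha)
     beta = (lam @ (np.abs(proj) ** alpha * np.sign(proj))) / gamma ** alpha
     return gamma, beta

 # Example: unit masses on the two positive coordinate axes in R^2
 s = np.array([[1.0, 0.0], [0.0, 1.0]])
 lam = np.array([1.0, 1.0])
 print(projection_parameters(np.array([1.0, 1.0]), s, lam, alpha=1.5))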


Special cases

There are special cases where the multivariate characteristic function takes a simpler form. Define the characteristic function of a stable marginal as
: \omega(y \mid \alpha, \beta) = \begin{cases} |y|^\alpha \left[ 1 - i \beta \tan\left(\tfrac{\pi \alpha}{2}\right) \operatorname{sign}(y) \right] & \alpha \ne 1 \\ |y| \left[ 1 + i \beta \tfrac{2}{\pi} \operatorname{sign}(y) \ln |y| \right] & \alpha = 1 \end{cases}
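With this notation, the characteristic function of a univariate stable variable X \sim S(\alpha, \beta, \gamma, \delta) (in the parametrization of Samorodnitsky and Taqqu) can be written compactly as
: E \exp(i y X) = \exp\left\{ -\gamma^\alpha\, \omega(y \mid \alpha, \beta) + i \delta y \right\}.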


Isotropic multivariate stable distribution

The characteristic function is
: E \exp(i u^T X) = \exp\left\{ -\gamma_0^\alpha |u|^\alpha + i u^T \delta \right\}
The spectral measure is continuous and uniform, leading to radial/isotropic symmetry. For the multinormal case ''α'' = 2, this corresponds to independent components, but this is not the case when ''α'' < 2. Isotropy is a special case of ellipticity (see the next paragraph): just take \Sigma to be a multiple of the identity matrix.
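Indeed, taking \Sigma = \gamma_0^2 I in the elliptical characteristic function below gives
: (u^T \Sigma u)^{\alpha/2} = (\gamma_0^2\, u^T u)^{\alpha/2} = \gamma_0^\alpha |u|^\alpha,
recovering the isotropic form.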


Elliptically contoured multivariate stable distribution

The elliptically contoured multivariate stable distribution is a special symmetric case of the multivariate stable distribution. If ''X'' is ''α''-stable and elliptically contoured, then it has joint characteristic function
: E \exp(i u^T X) = \exp\left\{ i u^T \delta - (u^T \Sigma u)^{\alpha/2} \right\}
for some shift vector \delta \in \mathbb R^d (equal to the mean when it exists) and some positive definite matrix \Sigma (akin to a correlation matrix, although the usual definition of correlation fails to be meaningful). Note the relation to the characteristic function of the multivariate normal distribution,
: E \exp(i u^T X) = \exp\left\{ i u^T \delta - u^T \Sigma u \right\},
obtained when ''α'' = 2.
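For simulation, one standard route is the sub-Gaussian representation X = \delta + \sqrt{A}\, G, with A a totally skewed positive (α/2)-stable variable and G Gaussian. The sketch below follows the scaling in Samorodnitsky and Taqqu and assumes SciPy's default S1 parametrization of levy_stable; it applies for 0 < ''α'' < 2:

 import numpy as np
 from scipy.stats import levy_stable

 def sample_elliptical_stable(alpha, Sigma, delta, n, seed=None):
     """Draw n samples with characteristic function
     exp{ i u^T delta - (u^T Sigma u)^(alpha/2) }, for 0 < alpha < 2."""
     rng = np.random.default_rng(seed)
     d = len(delta)
     # Positive (alpha/2)-stable mixer with Laplace transform E[e^{-tA}] = e^{-t^(alpha/2)}
     scale = np.cos(np.pi * alpha / 4.0) ** (2.0 / alpha)
     A = levy_stable.rvs(alpha / 2.0, 1.0, loc=0.0, scale=scale,
                         size=n, random_state=rng)
     # Conditioning on A: E exp(i u^T X) = e^{i u^T delta} E_A exp(-A u^T Sigma u)
     G = rng.multivariate_normal(np.zeros(d), 2.0 * Sigma, size=n)
     return np.asarray(delta) + np.sqrt(A)[:, None] * G

Mixing a single positive stable amplitude across all coordinates is what couples the components, in contrast to the independent-component case below.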


Independent components

If the marginals are independent with X_j \sim S(\alpha, \beta_j, \gamma_j, \delta_j), then the characteristic function is
: E \exp(i u^T X) = \exp\left\{ -\sum_{j=1}^d \gamma_j^\alpha\, \omega(u_j \mid \alpha, \beta_j) + i u^T \delta \right\}
Observe that when ''α'' = 2 this reduces again to the multivariate normal; note that the i.i.d. case and the isotropic case do not coincide when ''α'' < 2. Independent components is a special case of discrete spectral measure (see the next paragraph), with the spectral measure supported by the standard unit vectors.
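The exponent above follows by factorizing the joint characteristic function over the independent coordinates and using \omega as defined in the special-cases section:
: E \exp(i u^T X) = \prod_{j=1}^d E \exp(i u_j X_j) = \prod_{j=1}^d \exp\left\{ -\gamma_j^\alpha\, \omega(u_j \mid \alpha, \beta_j) + i \delta_j u_j \right\}.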


Discrete

If the spectral measure is discrete with mass \lambda_j at s_j \in \mathbb S, j = 1, \ldots, m, the characteristic function is
: E \exp(i u^T X) = \exp\left\{ -\sum_{j=1}^m \lambda_j\, \omega(u^T s_j \mid \alpha, 1) + i u^T \delta \right\}
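A small sketch evaluating this characteristic function numerically (helper names are illustrative; the y \ln|y| term for ''α'' = 1 is taken to be 0 at y = 0):

 import numpy as np

 def omega(y, alpha, beta):
     """Marginal kernel omega(y | alpha, beta) from the special-cases section."""
     y = np.asarray(y, dtype=float)
     if alpha != 1:
         return np.abs(y) ** alpha * (1 - 1j * beta * np.tan(np.pi * alpha / 2) * np.sign(y))
     safe = np.where(y == 0, 1.0, np.abs(y))   # avoid log(0); the term vanishes there anyway
     return np.abs(y) * (1 + 1j * beta * (2 / np.pi) * np.sign(y) * np.log(safe))

 def discrete_cf(u, s, lam, alpha, delta):
     """E exp(i u^T X) for a discrete spectral measure: masses lam[j] at s[j]."""
     proj = s @ u
     return np.exp(-np.sum(lam * omega(proj, alpha, 1.0)) + 1j * (u @ delta))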


Linear properties

If X \sim S(\alpha, \beta(\cdot), \gamma(\cdot), \delta(\cdot)) is ''d''-dimensional, ''A'' is an ''m'' × ''d'' matrix, and b \in \mathbb R^m, then ''AX'' + ''b'' is ''m''-dimensional \alpha-stable with scale function \gamma(A^T \cdot), skewness function \beta(A^T \cdot), and location function \delta(A^T \cdot) + b^T \cdot.
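This follows from u^T (AX + b) = (A^T u)^T X + u^T b: projecting ''AX'' + ''b'' onto u is the same as projecting ''X'' onto A^T u and shifting by u^T b, so by the projection parametrization
: u^T (AX + b) \sim s\left(\alpha, \beta(A^T u), \gamma(A^T u), \delta(A^T u) + u^T b\right).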


Inference in the independent component model

Bickson and Guestrin have shown how to compute inference in closed form in a linear model (or equivalently a factor analysis model) involving independent component models. [D. Bickson and C. Guestrin. Inference in linear models with multivariate heavy-tails. In Neural Information Processing Systems (NIPS) 2010, Vancouver, Canada, Dec. 2010. https://www.cs.cmu.edu/~bickson/stable/] More specifically, let X_i \sim S(\alpha, \beta_i, \gamma_i, \delta_i), i = 1, \ldots, n, be a family of independent unobserved univariates drawn from a stable distribution. Given a known linear relation matrix A of size n \times n, the observations Y_i = \sum_{j=1}^n A_{ij} X_j are assumed to be distributed as a convolution of the hidden factors X_i, and are hence themselves ''α''-stable, Y_i \sim S(\alpha, \beta'_i, \gamma'_i, \delta'_i). The inference task is to compute the most likely X_i given the linear relation matrix A and the observations Y_i. This task can be computed in closed form in O(''n''³). An application of this construction is multiuser detection with stable, non-Gaussian noise.
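A minimal generative sketch of this model (illustrative only: it draws the hidden factors and observations, while the closed-form inference itself is implemented in the package linked under Resources, not here):

 import numpy as np
 from scipy.stats import levy_stable

 rng = np.random.default_rng(0)
 n = 4
 alpha, beta = 1.5, 0.0                      # common parameters for simplicity
 A = rng.standard_normal((n, n))             # known linear relation matrix
 X = levy_stable.rvs(alpha, beta, size=n, random_state=rng)  # hidden factors
 Y = A @ X                                   # observed convolution of the factors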


See also

* Multivariate Cauchy distribution
* Multivariate normal distribution


Resources

* Mark Veillette's stable distribution MATLAB package: http://www.mathworks.com/matlabcentral/fileexchange/37514
* The plots on this page were generated using Danny Bickson's inference in linear-stable model MATLAB package: https://www.cs.cmu.edu/~bickson/stable

