Complex Random Vector

In probability theory and statistics, a complex random vector is typically a tuple of complex-valued random variables, and generally is a random variable taking values in a vector space over the field of complex numbers. If Z_1,\ldots,Z_n are complex-valued random variables, then the ''n''-tuple \left( Z_1,\ldots,Z_n \right) is a complex random vector. Complex random vectors can always be considered as pairs of real random vectors: their real and imaginary parts. Some concepts of real random vectors have a straightforward generalization to complex random vectors, for example the definition of the mean of a complex random vector. Other concepts are unique to complex random vectors. Applications of complex random vectors are found in digital signal processing.


Definition

A complex random vector \mathbf{Z} = (Z_1,\ldots,Z_n)^T on the probability space (\Omega,\mathcal{F},P) is a function \mathbf{Z} \colon \Omega \to \mathbb{C}^n such that the vector (\Re(Z_1),\Im(Z_1),\ldots,\Re(Z_n),\Im(Z_n))^T is a real random vector on (\Omega,\mathcal{F},P), where \Re(z) denotes the real part of z and \Im(z) denotes the imaginary part of z.
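This identification with a real random vector of twice the length can be sketched numerically (a minimal NumPy illustration; the distribution chosen for Z is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3
# One draw of a complex random vector Z of length n; here the real and
# imaginary parts are independent standard normals (an arbitrary choice).
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The associated real random vector (Re Z_1, Im Z_1, ..., Re Z_n, Im Z_n)^T.
real_vec = np.empty(2 * n)
real_vec[0::2] = z.real
real_vec[1::2] = z.imag
```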


Cumulative distribution function

The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form P(Z \leq 1+3i) make no sense. However, expressions of the form P(\Re(Z) \leq 1, \Im(Z) \leq 3) make sense. Therefore, the cumulative distribution function F_{\mathbf{Z}} : \mathbb{C}^n \to [0,1] of a complex random vector \mathbf{Z}=(Z_1,\ldots,Z_n)^T is defined as

:F_{\mathbf{Z}}(\mathbf{z}) = P(\Re(Z_1) \leq \Re(z_1), \Im(Z_1) \leq \Im(z_1), \ldots, \Re(Z_n) \leq \Re(z_n), \Im(Z_n) \leq \Im(z_n))

where \mathbf{z} = (z_1,\ldots,z_n)^T.
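An empirical version of this definition can be sketched as follows (an illustrative Monte Carlo estimate for a scalar complex random variable; names are ours, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples of a scalar complex random variable Z; here a standard complex
# normal: real and imaginary parts independent N(0, 1/2).
samples = (rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)) / np.sqrt(2)

def empirical_cdf(z, data):
    """Estimate F_Z(z) = P(Re Z <= Re z, Im Z <= Im z) from samples."""
    return np.mean((data.real <= z.real) & (data.imag <= z.imag))

# For large thresholds the CDF approaches 1.
p = empirical_cdf(3 + 3j, samples)
```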


Expectation

As in the real case, the expectation (also called expected value) of a complex random vector is taken component-wise:

:\operatorname{E}[\mathbf{Z}] = (\operatorname{E}[Z_1],\ldots,\operatorname{E}[Z_n])^T
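For instance, estimating the component-wise mean from samples (illustrative NumPy sketch; the true mean is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200_000

# m independent draws (rows) of a 2-component complex random vector
# whose true mean is (2 + 1j, -1j).
true_mean = np.array([2 + 1j, -1j])
samples = true_mean + rng.standard_normal((m, 2)) + 1j * rng.standard_normal((m, 2))

mean_est = samples.mean(axis=0)  # expectation taken component-wise
```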


Covariance matrix and pseudo-covariance matrix

The ''covariance matrix'' (also called ''second central moment'') \operatorname{K}_{\mathbf{Z}\mathbf{Z}} contains the covariances between all pairs of components. The covariance matrix of an n \times 1 random vector is an n \times n matrix whose (i,j)th element is the covariance between the ''i''th and the ''j''th random variables. Unlike in the case of real random variables, the covariance between two complex random variables involves the complex conjugate of one of the two. Thus the covariance matrix is a Hermitian matrix:

:\operatorname{K}_{\mathbf{Z}\mathbf{Z}} = \operatorname{E}[(\mathbf{Z}-\operatorname{E}[\mathbf{Z}])(\mathbf{Z}-\operatorname{E}[\mathbf{Z}])^H], \quad \text{with entries } (\operatorname{K}_{\mathbf{Z}\mathbf{Z}})_{ij} = \operatorname{E}[(Z_i - \operatorname{E}[Z_i])\overline{(Z_j - \operatorname{E}[Z_j])}]

The ''pseudo-covariance matrix'' (also called ''relation matrix'') \operatorname{J}_{\mathbf{Z}\mathbf{Z}} is defined by replacing the Hermitian transpose with the transpose in the definition above:

:\operatorname{J}_{\mathbf{Z}\mathbf{Z}} = \operatorname{E}[(\mathbf{Z}-\operatorname{E}[\mathbf{Z}])(\mathbf{Z}-\operatorname{E}[\mathbf{Z}])^T], \quad \text{with entries } (\operatorname{J}_{\mathbf{Z}\mathbf{Z}})_{ij} = \operatorname{E}[(Z_i - \operatorname{E}[Z_i])(Z_j - \operatorname{E}[Z_j])]

;Properties
* The covariance matrix is a Hermitian matrix, i.e. \operatorname{K}_{\mathbf{Z}\mathbf{Z}}^H = \operatorname{K}_{\mathbf{Z}\mathbf{Z}}.
* The pseudo-covariance matrix is a symmetric matrix, i.e. \operatorname{J}_{\mathbf{Z}\mathbf{Z}}^T = \operatorname{J}_{\mathbf{Z}\mathbf{Z}}.
* The covariance matrix is a positive semidefinite matrix, i.e. \mathbf{a}^H \operatorname{K}_{\mathbf{Z}\mathbf{Z}} \mathbf{a} \ge 0 \quad \text{for all } \mathbf{a} \in \mathbb{C}^n.
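A numerical sketch of both matrices and the listed properties (NumPy, with samples stored as rows; the mixing matrix A is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 100_000, 3

# Z = A W for a fixed complex matrix A and W with i.i.d. complex-valued
# components, so Z has a nontrivial covariance structure.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
w = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
z = w @ A.T

zc = z - z.mean(axis=0)      # centered samples
K = zc.T @ zc.conj() / m     # covariance matrix        E[(Z-mu)(Z-mu)^H]
J = zc.T @ zc / m            # pseudo-covariance matrix E[(Z-mu)(Z-mu)^T]
```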


Covariance matrices of real and imaginary parts

By decomposing the random vector \mathbf{Z} into its real part \mathbf{X} = \Re(\mathbf{Z}) and imaginary part \mathbf{Y} = \Im(\mathbf{Z}) (i.e. \mathbf{Z}=\mathbf{X}+i\mathbf{Y}), the pair (\mathbf{X},\mathbf{Y}) has a covariance matrix of the form:

:\begin{bmatrix} \operatorname{K}_{\mathbf{X}\mathbf{X}} & \operatorname{K}_{\mathbf{X}\mathbf{Y}} \\ \operatorname{K}_{\mathbf{Y}\mathbf{X}} & \operatorname{K}_{\mathbf{Y}\mathbf{Y}} \end{bmatrix}

The matrices \operatorname{K}_{\mathbf{Z}\mathbf{Z}} and \operatorname{J}_{\mathbf{Z}\mathbf{Z}} can be related to the covariance matrices of \mathbf{X} and \mathbf{Y} via the following expressions:

: \begin{align}
& \operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{E}[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^T] = \tfrac{1}{2}\Re(\operatorname{K}_{\mathbf{Z}\mathbf{Z}} + \operatorname{J}_{\mathbf{Z}\mathbf{Z}}) \\
& \operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{E}[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^T] = \tfrac{1}{2}\Im(\operatorname{J}_{\mathbf{Z}\mathbf{Z}} - \operatorname{K}_{\mathbf{Z}\mathbf{Z}}) \\
& \operatorname{K}_{\mathbf{Y}\mathbf{X}} = \operatorname{E}[(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^T] = \tfrac{1}{2}\Im(\operatorname{K}_{\mathbf{Z}\mathbf{Z}} + \operatorname{J}_{\mathbf{Z}\mathbf{Z}}) \\
& \operatorname{K}_{\mathbf{Y}\mathbf{Y}} = \operatorname{E}[(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^T] = \tfrac{1}{2}\Re(\operatorname{K}_{\mathbf{Z}\mathbf{Z}} - \operatorname{J}_{\mathbf{Z}\mathbf{Z}})
\end{align}

Conversely:

: \begin{align}
& \operatorname{K}_{\mathbf{Z}\mathbf{Z}} = \operatorname{K}_{\mathbf{X}\mathbf{X}} + \operatorname{K}_{\mathbf{Y}\mathbf{Y}} + i(\operatorname{K}_{\mathbf{Y}\mathbf{X}} - \operatorname{K}_{\mathbf{X}\mathbf{Y}}) \\
& \operatorname{J}_{\mathbf{Z}\mathbf{Z}} = \operatorname{K}_{\mathbf{X}\mathbf{X}} - \operatorname{K}_{\mathbf{Y}\mathbf{Y}} + i(\operatorname{K}_{\mathbf{Y}\mathbf{X}} + \operatorname{K}_{\mathbf{X}\mathbf{Y}})
\end{align}
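These relations hold exactly for sample covariances as well, which makes them easy to verify numerically (illustrative NumPy sketch; `cov` is a hypothetical helper, not a library function):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 50_000, 2

# An improper complex vector: correlated real and imaginary parts.
x = rng.standard_normal((m, n))
y = 0.5 * x + rng.standard_normal((m, n))
z = x + 1j * y

def cov(a, b):
    """Sample cross-covariance E[(a - E a)(b - E b)^T] for real samples (rows)."""
    ac, bc = a - a.mean(axis=0), b - b.mean(axis=0)
    return ac.T @ bc / len(ac)

zc = z - z.mean(axis=0)
K = zc.T @ zc.conj() / m   # covariance of Z
J = zc.T @ zc / m          # pseudo-covariance of Z

# Recover the real/imaginary block covariances from K and J.
Kxx = 0.5 * np.real(K + J)
Kxy = 0.5 * np.imag(J - K)
Kyx = 0.5 * np.imag(K + J)
Kyy = 0.5 * np.real(K - J)
```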


Cross-covariance matrix and pseudo-cross-covariance matrix

The cross-covariance matrix between two complex random vectors \mathbf{Z},\mathbf{W} is defined as:

:\operatorname{K}_{\mathbf{Z}\mathbf{W}} = \operatorname{E}[(\mathbf{Z}-\operatorname{E}[\mathbf{Z}])(\mathbf{W}-\operatorname{E}[\mathbf{W}])^H], \quad \text{with entries } (\operatorname{K}_{\mathbf{Z}\mathbf{W}})_{ij} = \operatorname{E}[(Z_i - \operatorname{E}[Z_i])\overline{(W_j - \operatorname{E}[W_j])}]

And the pseudo-cross-covariance matrix is defined as:

:\operatorname{J}_{\mathbf{Z}\mathbf{W}} = \operatorname{E}[(\mathbf{Z}-\operatorname{E}[\mathbf{Z}])(\mathbf{W}-\operatorname{E}[\mathbf{W}])^T], \quad \text{with entries } (\operatorname{J}_{\mathbf{Z}\mathbf{W}})_{ij} = \operatorname{E}[(Z_i - \operatorname{E}[Z_i])(W_j - \operatorname{E}[W_j])]

Two complex random vectors \mathbf{Z} and \mathbf{W} are called uncorrelated if

:\operatorname{K}_{\mathbf{Z}\mathbf{W}}=\operatorname{J}_{\mathbf{Z}\mathbf{W}}=0.
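For independent vectors both matrices vanish in expectation, which a simulation can illustrate (NumPy sketch; tolerances reflect Monte Carlo error):

```python
import numpy as np

rng = np.random.default_rng(5)
m = 200_000

# Two independent complex random vectors; independence implies they
# are uncorrelated, so both cross-covariance estimates should be near zero.
z = rng.standard_normal((m, 2)) + 1j * rng.standard_normal((m, 2))
w = rng.standard_normal((m, 3)) + 1j * rng.standard_normal((m, 3))

zc, wc = z - z.mean(axis=0), w - w.mean(axis=0)
Kzw = zc.T @ wc.conj() / m   # cross-covariance        E[(Z-mu)(W-nu)^H]
Jzw = zc.T @ wc / m          # pseudo-cross-covariance E[(Z-mu)(W-nu)^T]
```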


Independence

Two complex random vectors \mathbf{Z}=(Z_1,\ldots,Z_m)^T and \mathbf{W}=(W_1,\ldots,W_n)^T are called independent if

:F_{\mathbf{Z},\mathbf{W}}(\mathbf{z},\mathbf{w}) = F_{\mathbf{Z}}(\mathbf{z}) \cdot F_{\mathbf{W}}(\mathbf{w}) \quad \text{for all } \mathbf{z},\mathbf{w}

where F_{\mathbf{Z}}(\mathbf{z}) and F_{\mathbf{W}}(\mathbf{w}) denote the cumulative distribution functions of \mathbf{Z} and \mathbf{W} as defined above and F_{\mathbf{Z},\mathbf{W}}(\mathbf{z},\mathbf{w}) denotes their joint cumulative distribution function. Independence of \mathbf{Z} and \mathbf{W} is often denoted by \mathbf{Z} \perp\!\!\!\perp \mathbf{W}. Written component-wise, \mathbf{Z} and \mathbf{W} are called independent if

:F_{Z_1,\ldots,Z_m,W_1,\ldots,W_n}(z_1,\ldots,z_m,w_1,\ldots,w_n) = F_{Z_1,\ldots,Z_m}(z_1,\ldots,z_m) \cdot F_{W_1,\ldots,W_n}(w_1,\ldots,w_n) \quad \text{for all } z_1,\ldots,z_m,w_1,\ldots,w_n.


Circular symmetry

A complex random vector \mathbf{Z} is called circularly symmetric if for every deterministic \varphi \in [-\pi,\pi) the distribution of e^{i\varphi}\mathbf{Z} equals the distribution of \mathbf{Z}.

;Properties
* The expectation of a circularly symmetric complex random vector is either zero or it is not defined.
* The pseudo-covariance matrix of a circularly symmetric complex random vector is zero.
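The standard complex normal vector is a classic circularly symmetric example, and its sample pseudo-covariance is correspondingly close to zero (an illustrative simulation, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 200_000, 2

# Standard complex normal samples: circularly symmetric, since
# multiplying by e^{i*phi} leaves the distribution unchanged.
z = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

phi = 0.7                      # an arbitrary deterministic rotation angle
z_rot = np.exp(1j * phi) * z   # same distribution as z

J = z.T @ z / m                # sample pseudo-covariance (true value: 0)
J_rot = z_rot.T @ z_rot / m    # rotating multiplies J by e^{2i*phi}
```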


Proper complex random vectors

A complex random vector \mathbf{Z} is called proper if the following three conditions are all satisfied:
* \operatorname{E}[\mathbf{Z}] = 0 (zero mean)
* \operatorname{Var}[Z_1] < \infty, \ldots, \operatorname{Var}[Z_n] < \infty (all components have finite variance)
* \operatorname{E}[\mathbf{Z}\mathbf{Z}^T] = 0 (vanishing pseudo-covariance)

Two complex random vectors \mathbf{Z},\mathbf{W} are called jointly proper if the composite random vector (Z_1,Z_2,\ldots,Z_m,W_1,W_2,\ldots,W_n)^T is proper.

;Properties
* A complex random vector \mathbf{Z} is proper if, and only if, for all (deterministic) vectors \mathbf{c} \in \mathbb{C}^n the complex random variable \mathbf{c}^T \mathbf{Z} is proper.
* Linear transformations of proper complex random vectors are proper, i.e. if \mathbf{Z} is a proper random vector with n components and A is a deterministic m \times n matrix, then the complex random vector A\mathbf{Z} is also proper.
* Every circularly symmetric complex random vector with finite variance of all its components is proper.
* There are proper complex random vectors that are not circularly symmetric.
* A real random vector is proper if and only if it is constant.
* Two jointly proper complex random vectors are uncorrelated if and only if their cross-covariance matrix is zero, i.e. if \operatorname{K}_{\mathbf{Z}\mathbf{W}} = 0.
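The closure of properness under linear transformations can be sketched numerically (NumPy simulation; the matrix A is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 200_000, 3

# A proper vector: zero mean, finite variance, E[Z Z^T] = 0.
z = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

# A deterministic linear transformation A Z (arbitrary 2x3 complex matrix).
A = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))
az = z @ A.T

J_z = z.T @ z / m      # sample pseudo-covariance of Z   (true value: 0)
J_az = az.T @ az / m   # sample pseudo-covariance of A Z (true value: 0)
```

Note that for samples the identity J_az = A J_z A^T holds exactly, so a vanishing pseudo-covariance of Z carries over to A Z.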


Cauchy-Schwarz inequality

The Cauchy–Schwarz inequality for complex random vectors is

:\left| \operatorname{E}[\mathbf{Z}^H \mathbf{W}] \right|^2 \leq \operatorname{E}[\mathbf{Z}^H \mathbf{Z}] \operatorname{E}[\mathbf{W}^H \mathbf{W}].
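The inequality also holds exactly for sample averages, since they are expectations under the empirical distribution (NumPy sketch; the correlation between Z and W is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(8)
m, n = 10_000, 4

# Correlated complex random vectors Z and W (rows are samples).
z = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
w = 0.8 * z + 0.3 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))

# |E[Z^H W]|^2 versus E[Z^H Z] * E[W^H W], with E replaced by sample means.
lhs = abs(np.mean(np.sum(z.conj() * w, axis=1))) ** 2
rhs = np.mean(np.sum(z.conj() * z, axis=1)).real * np.mean(np.sum(w.conj() * w, axis=1)).real
```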




Characteristic function

The characteristic function of a complex random vector \mathbf{Z} with n components is a function \mathbb{C}^n \to \mathbb{C} defined by:

: \varphi_{\mathbf{Z}}(\boldsymbol{\omega}) = \operatorname{E}\left[ e^{i\Re(\boldsymbol{\omega}^H \mathbf{Z})} \right] = \operatorname{E}\left[ e^{i(\Re(\boldsymbol{\omega})^T \Re(\mathbf{Z}) + \Im(\boldsymbol{\omega})^T \Im(\mathbf{Z}))} \right]
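An empirical characteristic function can be computed directly from this definition (NumPy sketch; `char_fn` is a hypothetical helper, and the closed-form comparison assumes a standard complex normal Z):

```python
import numpy as np

rng = np.random.default_rng(9)
m, n = 200_000, 2

# Standard complex normal vector: components independent, CN(0, 1).
z = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

def char_fn(omega, samples):
    """Empirical characteristic function E[exp(i * Re(omega^H Z))]."""
    return np.mean(np.exp(1j * np.real(samples @ omega.conj())))

omega = np.array([1.0 + 0.5j, -0.3j])
phi = char_fn(omega, z)

# For the standard complex normal, phi(omega) = exp(-|omega|^2 / 4).
expected = np.exp(-np.sum(np.abs(omega) ** 2) / 4)
```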


See also

* Complex normal distribution
* Complex random variable (scalar case)


References

{{reflist}}