In probability theory and statistics, a cross-covariance matrix is a matrix whose element in the ''i'', ''j'' position is the covariance between the ''i''-th element of a random vector and the ''j''-th element of another random vector. When the two random vectors are the same, the cross-covariance matrix is referred to as the covariance matrix. A random vector is a random variable with multiple dimensions. Each element of the vector is a scalar random variable. Each element has either a finite number of ''observed'' empirical values or a finite or infinite number of ''potential'' values. The potential values are specified by a theoretical joint probability distribution. Intuitively, the cross-covariance matrix generalizes the notion of covariance to multiple dimensions. The cross-covariance matrix of two random vectors \mathbf{X} and \mathbf{Y} is typically denoted by \operatorname{K}_{\mathbf{X}\mathbf{Y}} or \Sigma_{\mathbf{X}\mathbf{Y}}.


Definition

For random vectors \mathbf{X} and \mathbf{Y}, each containing random elements whose expected value and variance exist, the cross-covariance matrix of \mathbf{X} and \mathbf{Y} is defined by

:\operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{cov}(\mathbf{X},\mathbf{Y}) = \operatorname{E}[(\mathbf{X} - \mathbf{\mu_X})(\mathbf{Y} - \mathbf{\mu_Y})^{\mathrm T}]

where \mathbf{\mu_X} = \operatorname{E}[\mathbf{X}] and \mathbf{\mu_Y} = \operatorname{E}[\mathbf{Y}] are vectors containing the expected values of \mathbf{X} and \mathbf{Y}. The vectors \mathbf{X} and \mathbf{Y} need not have the same dimension, and either might be a scalar value.

The cross-covariance matrix is the matrix whose (i,j) entry is the covariance

:\operatorname{K}_{X_i Y_j} = \operatorname{cov}[X_i, Y_j] = \operatorname{E}[(X_i - \operatorname{E}[X_i])(Y_j - \operatorname{E}[Y_j])]

between the ''i''-th element of \mathbf{X} and the ''j''-th element of \mathbf{Y}. This gives the following component-wise definition of the cross-covariance matrix:

:\operatorname{K}_{\mathbf{X}\mathbf{Y}} =
\begin{bmatrix}
\operatorname{E}[(X_1 - \operatorname{E}[X_1])(Y_1 - \operatorname{E}[Y_1])] & \operatorname{E}[(X_1 - \operatorname{E}[X_1])(Y_2 - \operatorname{E}[Y_2])] & \cdots & \operatorname{E}[(X_1 - \operatorname{E}[X_1])(Y_n - \operatorname{E}[Y_n])] \\
\operatorname{E}[(X_2 - \operatorname{E}[X_2])(Y_1 - \operatorname{E}[Y_1])] & \operatorname{E}[(X_2 - \operatorname{E}[X_2])(Y_2 - \operatorname{E}[Y_2])] & \cdots & \operatorname{E}[(X_2 - \operatorname{E}[X_2])(Y_n - \operatorname{E}[Y_n])] \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{E}[(X_m - \operatorname{E}[X_m])(Y_1 - \operatorname{E}[Y_1])] & \operatorname{E}[(X_m - \operatorname{E}[X_m])(Y_2 - \operatorname{E}[Y_2])] & \cdots & \operatorname{E}[(X_m - \operatorname{E}[X_m])(Y_n - \operatorname{E}[Y_n])]
\end{bmatrix}
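As an illustrative sketch (not part of the definition itself), the sample analogue of this matrix can be computed directly from paired observations. The following Python/NumPy helper is hypothetical (the name cross_covariance and its interface are my own); it centers each sample and averages the outer products:

    import numpy as np

    def cross_covariance(X, Y):
        """Sample cross-covariance matrix of two random vectors.

        X : array of shape (n, m) -- n observations of an m-dimensional vector
        Y : array of shape (n, q) -- n observations of a q-dimensional vector
        Returns an (m, q) matrix whose (i, j) entry estimates cov(X_i, Y_j).
        """
        X = np.asarray(X, dtype=float)
        Y = np.asarray(Y, dtype=float)
        n = X.shape[0]
        Xc = X - X.mean(axis=0)   # subtract the sample mean of each component
        Yc = Y - Y.mean(axis=0)
        return Xc.T @ Yc / n      # average of outer products (X - mu_X)(Y - mu_Y)^T
                                  # (divide by n - 1 instead for the unbiased estimator)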


Example

For example, if \mathbf{X} = \left( X_1,X_2,X_3 \right)^{\mathrm T} and \mathbf{Y} = \left( Y_1,Y_2 \right)^{\mathrm T} are random vectors, then \operatorname{cov}(\mathbf{X},\mathbf{Y}) is a 3 \times 2 matrix whose (i,j)-th entry is \operatorname{cov}(X_i,Y_j).
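Continuing the hypothetical sketch above, drawing samples of a 3-dimensional \mathbf{X} and a 2-dimensional \mathbf{Y} yields a 3 \times 2 estimate, matching the shape stated here (the data below are arbitrary, chosen only to exercise the function):

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 3))              # 10,000 draws of a 3-dimensional vector
    Y = X[:, :2] + rng.normal(size=(10_000, 2))   # 2-dimensional vector correlated with X
    K = cross_covariance(X, Y)
    print(K.shape)                                # (3, 2): one covariance per (i, j) pair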


Properties

For the cross-covariance matrix, the following basic properties apply:
# \operatorname{cov}(\mathbf{X},\mathbf{Y}) = \operatorname{E}[\mathbf{X} \mathbf{Y}^{\mathrm T}] - \mathbf{\mu_X} \mathbf{\mu_Y}^{\mathrm T}
# \operatorname{cov}(\mathbf{X},\mathbf{Y}) = \operatorname{cov}(\mathbf{Y},\mathbf{X})^{\mathrm T}
# \operatorname{cov}(\mathbf{X_1} + \mathbf{X_2},\mathbf{Y}) = \operatorname{cov}(\mathbf{X_1},\mathbf{Y}) + \operatorname{cov}(\mathbf{X_2}, \mathbf{Y})
# \operatorname{cov}(A\mathbf{X} + \mathbf{a}, B^{\mathrm T}\mathbf{Y} + \mathbf{b}) = A\, \operatorname{cov}(\mathbf{X}, \mathbf{Y}) \,B
# If \mathbf{X} and \mathbf{Y} are independent (or somewhat less restrictively, if every random variable in \mathbf{X} is uncorrelated with every random variable in \mathbf{Y}), then \operatorname{cov}(\mathbf{X},\mathbf{Y}) = 0_{p \times q}
where \mathbf{X}, \mathbf{X_1} and \mathbf{X_2} are random p \times 1 vectors, \mathbf{Y} is a random q \times 1 vector, \mathbf{a} is a q \times 1 vector, \mathbf{b} is a p \times 1 vector, A and B are q \times p matrices of constants, and 0_{p \times q} is a p \times q matrix of zeroes. A numerical check of these identities is sketched below.
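These identities can be verified numerically with the hypothetical estimator sketched earlier; because the sample cross-covariance is bilinear in the centered data, properties 1, 2, and 4 hold exactly for sample estimates (up to floating-point error), not merely in expectation:

    p, q, n = 4, 3, 500
    rng = np.random.default_rng(1)
    X = rng.normal(size=(n, p))
    Y = rng.normal(size=(n, q))
    A = rng.normal(size=(q, p))   # q x p matrix of constants
    B = rng.normal(size=(q, p))
    a = rng.normal(size=q)        # q x 1 shift
    b = rng.normal(size=p)        # p x 1 shift

    K = cross_covariance(X, Y)    # p x q
    # Property 1: cov(X, Y) = E[X Y^T] - mu_X mu_Y^T (sample analogue)
    assert np.allclose(K, X.T @ Y / n - np.outer(X.mean(axis=0), Y.mean(axis=0)))
    # Property 2: cov(X, Y) = cov(Y, X)^T
    assert np.allclose(K, cross_covariance(Y, X).T)
    # Property 4: cov(AX + a, B^T Y + b) = A cov(X, Y) B
    lhs = cross_covariance(X @ A.T + a, Y @ B + b)   # rows transform as (A x)^T = x^T A^T
    assert np.allclose(lhs, A @ K @ B)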


Definition for complex random vectors

If \mathbf{Z} and \mathbf{W} are complex random vectors, the definition of the cross-covariance matrix is slightly changed. Transposition is replaced by Hermitian transposition:

:\operatorname{K}_{\mathbf{Z}\mathbf{W}} = \operatorname{cov}(\mathbf{Z},\mathbf{W}) \ \stackrel{\mathrm{def}}{=}\ \operatorname{E}[(\mathbf{Z}-\mathbf{\mu_Z})(\mathbf{W}-\mathbf{\mu_W})^{\mathrm H}]

For complex random vectors, another matrix called the pseudo-cross-covariance matrix is defined as follows:

:\operatorname{J}_{\mathbf{Z}\mathbf{W}} = \operatorname{cov}(\mathbf{Z},\overline{\mathbf{W}}) \ \stackrel{\mathrm{def}}{=}\ \operatorname{E}[(\mathbf{Z}-\mathbf{\mu_Z})(\mathbf{W}-\mathbf{\mu_W})^{\mathrm T}]
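For complex data, the same sample computation works with a conjugate transpose. The sketch below (again an illustrative helper of my own, not library code) computes both the Hermitian cross-covariance and the pseudo-cross-covariance:

    def complex_cross_covariances(Z, W):
        """Sample K_{ZW} (Hermitian) and J_{ZW} (pseudo) for complex data.

        Z : complex array of shape (n, p); W : complex array of shape (n, q).
        """
        Zc = Z - Z.mean(axis=0)
        Wc = W - W.mean(axis=0)
        n = Z.shape[0]
        K = Zc.T @ Wc.conj() / n   # E[(Z - mu_Z)(W - mu_W)^H]
        J = Zc.T @ Wc / n          # E[(Z - mu_Z)(W - mu_W)^T], no conjugation
        return K, J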


Uncorrelatedness

Two random vectors \mathbf{X} and \mathbf{Y} are called uncorrelated if their cross-covariance matrix \operatorname{K}_{\mathbf{X}\mathbf{Y}} is a zero matrix. Complex random vectors \mathbf{Z} and \mathbf{W} are called uncorrelated if their cross-covariance matrix and pseudo-cross-covariance matrix are both zero, i.e. if \operatorname{K}_{\mathbf{Z}\mathbf{W}} = \operatorname{J}_{\mathbf{Z}\mathbf{W}} = 0.
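As a quick numerical sanity check (assuming the hypothetical helpers above and independent draws), independently generated complex vectors give sample \operatorname{K}_{\mathbf{Z}\mathbf{W}} and \operatorname{J}_{\mathbf{Z}\mathbf{W}} that both shrink toward the zero matrix as the sample size grows:

    n = 100_000
    rng = np.random.default_rng(2)
    Z = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
    W = rng.normal(size=(n, 3)) + 1j * rng.normal(size=(n, 3))  # independent of Z
    K, J = complex_cross_covariances(Z, W)
    print(np.abs(K).max(), np.abs(J).max())  # both O(1/sqrt(n)), near zero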

