In probability and statistics, a multivariate random variable or random vector is a list of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because they are all part of a single mathematical system; often they represent different properties of an individual statistical unit. For example, while a given person has a specific age, height and weight, the representation of these features of ''an unspecified person'' from within a group would be a random vector. Normally each element of a random vector is a real number.

Random vectors are often used as the underlying implementation of various types of aggregate random variables, e.g. a random matrix, random tree, random sequence, stochastic process, etc.

More formally, a multivariate random variable is a column vector \mathbf{X} = (X_1,\dots,X_n)^T (or its transpose, which is a row vector) whose components are scalar-valued random variables on the same probability space as each other, (\Omega, \mathcal{F}, P), where \Omega is the sample space, \mathcal{F} is the sigma-algebra (the collection of all events), and P is the probability measure (a function returning each event's probability).
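
As an illustrative sketch of the age-height-weight example (hypothetical numbers; NumPy assumed), a draw of one ''unspecified person'' is a single jointly sampled vector rather than three separate samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population parameters for (age [yr], height [cm], weight [kg]).
mean = np.array([35.0, 170.0, 70.0])
cov = np.array([[100.0, 10.0, 20.0],
                [ 10.0, 60.0, 40.0],
                [ 20.0, 40.0, 90.0]])

# One realization of the random vector X = (age, height, weight)^T:
# all three components are drawn jointly, so their dependence is preserved.
x = rng.multivariate_normal(mean, cov)
print(x)
```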


Probability distribution

Every random vector gives rise to a probability measure on \mathbb{R}^n with the Borel algebra as the underlying sigma-algebra. This measure is also known as the joint probability distribution, the joint distribution, or the multivariate distribution of the random vector. The distributions of each of the component random variables X_i are called marginal distributions. The conditional probability distribution of X_i given X_j is the probability distribution of X_i when X_j is known to be a particular value.

The cumulative distribution function F_{\mathbf{X}} : \mathbb{R}^n \to [0,1] of a random vector \mathbf{X}=(X_1,\dots,X_n)^T is defined as

:F_{\mathbf{X}}(\mathbf{x}) = \operatorname{P}(X_1 \le x_1,\ldots,X_n \le x_n),

where \mathbf{x} = (x_1, \dots, x_n)^T.
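
As a minimal sketch (NumPy assumed, with a hypothetical two-component example), the joint CDF at a point can be estimated as the fraction of sampled vectors lying componentwise below that point:

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.standard_normal((100_000, 2))  # draws of X = (X_1, X_2)^T with i.i.d. N(0,1) components

def empirical_cdf(samples: np.ndarray, x: np.ndarray) -> float:
    """Fraction of sampled vectors with X_1 <= x_1, ..., X_n <= x_n."""
    return float(np.mean(np.all(samples <= x, axis=1)))

# For independent standard normal components, F_X(0, 0) = 0.5 * 0.5 = 0.25.
print(empirical_cdf(samples, np.array([0.0, 0.0])))  # ~0.25
```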


Operations on random vectors

Random vectors can be subjected to the same kinds of algebraic operations as can non-random vectors: addition, subtraction, multiplication by a scalar, and the taking of inner products.


Affine transformations

Similarly, a new random vector \mathbf{Y} can be defined by applying an affine transformation g\colon \mathbb{R}^n \to \mathbb{R}^n to a random vector \mathbf{X}:

:\mathbf{Y}=\mathcal{A}\mathbf{X}+b,

where \mathcal{A} is an n \times n matrix and b is an n \times 1 column vector. If \mathcal{A} is an invertible matrix and \mathbf{X} has a probability density function f_{\mathbf{X}}, then the probability density of \mathbf{Y} is

:f_{\mathbf{Y}}(y)=\frac{f_{\mathbf{X}}\left(\mathcal{A}^{-1}(y-b)\right)}{|\det\mathcal{A}|}.
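
The change-of-variables formula can be sanity-checked numerically. The sketch below (NumPy/SciPy assumed, with hypothetical parameters) compares the formula against the known density of an affinely transformed Gaussian vector:

```python
import numpy as np
from scipy.stats import multivariate_normal

# If X ~ N(mu, K), then Y = A X + b ~ N(A mu + b, A K A^T).
mu = np.array([0.0, 1.0])
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])  # invertible (det = 3)
b = np.array([1.0, -1.0])

f_X = multivariate_normal(mu, K).pdf
f_Y_exact = multivariate_normal(A @ mu + b, A @ K @ A.T).pdf

y = np.array([0.3, 0.7])
# Change-of-variables formula: f_Y(y) = f_X(A^{-1}(y - b)) / |det A|.
f_Y_formula = f_X(np.linalg.solve(A, y - b)) / abs(np.linalg.det(A))
print(f_Y_formula, f_Y_exact(y))  # the two values agree
```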


Invertible mappings

More generally we can study invertible mappings of random vectors. Let g be a one-to-one mapping from an open subset \mathcal{D} of \mathbb{R}^n onto a subset \mathcal{R} of \mathbb{R}^n, let g have continuous partial derivatives in \mathcal{D} and let the Jacobian determinant of g be zero at no point of \mathcal{D}. Assume that the real random vector \mathbf{X} has a probability density function f_{\mathbf{X}}(\mathbf{x}) and satisfies P(\mathbf{X} \in \mathcal{D}) = 1. Then the random vector \mathbf{Y}=g(\mathbf{X}) has probability density

:f_{\mathbf{Y}}(\mathbf{y})=\left.\frac{f_{\mathbf{X}}(\mathbf{x})}{\left|\det\dfrac{\partial g(\mathbf{x})}{\partial\mathbf{x}}\right|}\right|_{\mathbf{x}=g^{-1}(\mathbf{y})}\mathbf{1}(\mathbf{y}\in R_{\mathbf{Y}}),

where \mathbf{1} denotes the indicator function and the set R_{\mathbf{Y}} = \{\mathbf{y}=g(\mathbf{x}):\mathbf{x}\in\mathcal{D}\}\subseteq\mathcal{R} denotes the support of \mathbf{Y}.
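
A sketch of the nonlinear case (NumPy/SciPy assumed): taking g to be the componentwise exponential of a standard normal vector, the Jacobian determinant at \mathbf{x} = g^{-1}(\mathbf{y}) is y_1 y_2, and the formula reproduces the lognormal density:

```python
import numpy as np
from scipy.stats import lognorm, norm

# g(x) = exp(x) componentwise on R^2, so g^{-1}(y) = log(y) and the
# Jacobian determinant evaluated at x = g^{-1}(y) equals y_1 * y_2.
def f_X(x: np.ndarray) -> float:
    return float(norm.pdf(x).prod())  # X has i.i.d. N(0,1) components

def f_Y(y: np.ndarray) -> float:
    return f_X(np.log(y)) / float(np.prod(y))

y = np.array([0.8, 2.5])
# Cross-check: each Y_i = exp(X_i) is lognormal with shape s = 1.
print(f_Y(y), lognorm.pdf(y, s=1).prod())  # the two values agree
```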


Expected value

The expected value or mean of a random vector \mathbf{X} is a fixed vector \operatorname{E}[\mathbf{X}] whose elements are the expected values of the respective random variables.


Covariance and cross-covariance


Definitions

The covariance matrix (also called second central moment or variance-covariance matrix) of an n \times 1 random vector is an n \times n matrix whose (''i,j'')th element is the covariance between the ''i''th and the ''j''th random variables. The covariance matrix is the expected value, element by element, of the n \times n matrix computed as [\mathbf{X}-\operatorname{E}[\mathbf{X}]][\mathbf{X}-\operatorname{E}[\mathbf{X}]]^T, where the superscript T refers to the transpose of the indicated vector:

:\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{Var}[\mathbf{X}] = \operatorname{E}\left[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^T\right].

By extension, the cross-covariance matrix between two random vectors \mathbf{X} and \mathbf{Y} (\mathbf{X} having n elements and \mathbf{Y} having p elements) is the n \times p matrix

:\operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{Cov}[\mathbf{X},\mathbf{Y}] = \operatorname{E}\left[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^T\right],

where again the matrix expectation is taken element-by-element in the matrix. Here the (''i,j'')th element is the covariance between the ''i''th element of \mathbf{X} and the ''j''th element of \mathbf{Y}.
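
A brief sketch (NumPy assumed, hypothetical parameters) estimating both matrices from samples; the outer-product-of-deviations computation is the same one np.cov performs:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 2.0]], size=N)  # n = 2
Y = X @ np.array([[1.0], [1.0]]) + rng.standard_normal((N, 1))             # p = 1

# K_XX = E[(X - E[X])(X - E[X])^T], estimated by the sample analogue.
Xc = X - X.mean(axis=0)
K_XX = Xc.T @ Xc / (N - 1)
print(K_XX)                     # close to [[1.0, 0.8], [0.8, 2.0]]
print(np.cov(X, rowvar=False))  # same estimate via np.cov

# K_XY = E[(X - E[X])(Y - E[Y])^T], an n x p = 2 x 1 matrix here.
Yc = Y - Y.mean(axis=0)
K_XY = Xc.T @ Yc / (N - 1)
print(K_XY)                     # close to [[1.8], [2.8]]
```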


Properties

The covariance matrix is a symmetric matrix, i.e.

:\operatorname{K}_{\mathbf{X}\mathbf{X}}^T = \operatorname{K}_{\mathbf{X}\mathbf{X}}.

The covariance matrix is a positive semidefinite matrix, i.e.

:\mathbf{a}^T \operatorname{K}_{\mathbf{X}\mathbf{X}} \mathbf{a} \ge 0 \quad \text{for all } \mathbf{a} \in \mathbb{R}^n.

The cross-covariance matrix \operatorname{Cov}[\mathbf{Y},\mathbf{X}] is simply the transpose of the matrix \operatorname{Cov}[\mathbf{X},\mathbf{Y}], i.e.

:\operatorname{K}_{\mathbf{Y}\mathbf{X}} = \operatorname{K}_{\mathbf{X}\mathbf{Y}}^T.
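
These two properties can be checked numerically, as in this sketch (NumPy assumed): symmetry by comparing the estimate with its transpose, positive semidefiniteness via its eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((50_000, 4)) @ rng.random((4, 4))  # correlated 4-dimensional samples
K = np.cov(X, rowvar=False)

print(np.allclose(K, K.T))                    # True: K is symmetric
print(np.linalg.eigvalsh(K).min() >= -1e-12)  # True: no negative eigenvalues
```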


Uncorrelatedness

Two random vectors \mathbf{X}=(X_1,\ldots,X_m)^T and \mathbf{Y}=(Y_1,\ldots,Y_n)^T are called uncorrelated if

:\operatorname{E}[\mathbf{X}\mathbf{Y}^T] = \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T.

They are uncorrelated if and only if their cross-covariance matrix \operatorname{K}_{\mathbf{X}\mathbf{Y}} is zero.


Correlation and cross-correlation


Definitions

The correlation matrix (also called second moment) of an n \times 1 random vector is an n \times n matrix whose (''i,j'')th element is the correlation between the ''i''th and the ''j''th random variables. The correlation matrix is the expected value, element by element, of the n \times n matrix computed as \mathbf{X}\mathbf{X}^T, where the superscript T refers to the transpose of the indicated vector:

:\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{E}[\mathbf{X}\mathbf{X}^T].

By extension, the cross-correlation matrix between two random vectors \mathbf{X} and \mathbf{Y} (\mathbf{X} having n elements and \mathbf{Y} having p elements) is the n \times p matrix

:\operatorname{R}_{\mathbf{X}\mathbf{Y}} = \operatorname{E}[\mathbf{X}\mathbf{Y}^T].


Properties

The correlation matrix is related to the covariance matrix by

:\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{K}_{\mathbf{X}\mathbf{X}} + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^T.

Similarly for the cross-correlation matrix and the cross-covariance matrix:

:\operatorname{R}_{\mathbf{X}\mathbf{Y}} = \operatorname{K}_{\mathbf{X}\mathbf{Y}} + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T.
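
A quick numerical check of the first identity (NumPy assumed, hypothetical parameters); with matching 1/n normalizations it holds exactly for sample moments as well:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.multivariate_normal([1.0, -2.0], [[1.0, 0.3], [0.3, 0.5]], size=100_000)

R = X.T @ X / len(X)                    # sample version of R_XX = E[X X^T]
K = np.cov(X, rowvar=False, bias=True)  # sample version of K_XX (1/n normalization)
m = X.mean(axis=0)                      # sample version of E[X]

print(np.allclose(R, K + np.outer(m, m)))  # True
```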


Orthogonality

Two random vectors of the same size \mathbf{X}=(X_1,\ldots,X_n)^T and \mathbf{Y}=(Y_1,\ldots,Y_n)^T are called orthogonal if

:\operatorname{E}[\mathbf{X}^T \mathbf{Y}] = 0.


Independence

Two random vectors \mathbf{X} and \mathbf{Y} are called independent if for all \mathbf{x} and \mathbf{y}

:F_{\mathbf{X},\mathbf{Y}}(\mathbf{x},\mathbf{y}) = F_{\mathbf{X}}(\mathbf{x}) \cdot F_{\mathbf{Y}}(\mathbf{y}),

where F_{\mathbf{X}}(\mathbf{x}) and F_{\mathbf{Y}}(\mathbf{y}) denote the cumulative distribution functions of \mathbf{X} and \mathbf{Y} and F_{\mathbf{X},\mathbf{Y}}(\mathbf{x},\mathbf{y}) denotes their joint cumulative distribution function. Independence of \mathbf{X} and \mathbf{Y} is often denoted by \mathbf{X} \perp\!\!\!\perp \mathbf{Y}. Written component-wise, \mathbf{X} and \mathbf{Y} are called independent if for all x_1,\ldots,x_m,y_1,\ldots,y_n

:F_{X_1,\ldots,X_m,Y_1,\ldots,Y_n}(x_1,\ldots,x_m,y_1,\ldots,y_n) = F_{X_1,\ldots,X_m}(x_1,\ldots,x_m) \cdot F_{Y_1,\ldots,Y_n}(y_1,\ldots,y_n).


Characteristic function

The characteristic function of a random vector \mathbf{X} with n components is a function \mathbb{R}^n \to \mathbb{C} that maps every vector \mathbf{\omega} = (\omega_1,\ldots,\omega_n)^T to a complex number. It is defined by

:\varphi_{\mathbf{X}}(\mathbf{\omega}) = \operatorname{E}\left[e^{i(\mathbf{\omega}^T\mathbf{X})}\right] = \operatorname{E}\left[e^{i(\omega_1 X_1 + \cdots + \omega_n X_n)}\right].
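
For a Gaussian vector the characteristic function has the known closed form \varphi_{\mathbf{X}}(\mathbf{\omega}) = e^{i\mathbf{\omega}^T\mu - \frac{1}{2}\mathbf{\omega}^T K_{\mathbf{X}\mathbf{X}}\mathbf{\omega}}, which the sample average of e^{i\mathbf{\omega}^T\mathbf{X}} should approximate; a sketch (NumPy assumed, hypothetical parameters):

```python
import numpy as np

rng = np.random.default_rng(5)
mu = np.array([1.0, 0.5])
K = np.array([[1.0, 0.2],
              [0.2, 0.8]])
X = rng.multivariate_normal(mu, K, size=500_000)

omega = np.array([0.7, -0.3])
phi_empirical = np.mean(np.exp(1j * X @ omega))                   # sample average of e^{i w^T X}
phi_gaussian = np.exp(1j * omega @ mu - 0.5 * omega @ K @ omega)  # closed form for a Gaussian
print(phi_empirical, phi_gaussian)  # approximately equal
```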


Further properties


Expectation of a quadratic form

One can take the expectation of a quadratic form in the random vector \mathbf{X} as follows:

:\operatorname{E}[\mathbf{X}^T A\mathbf{X}] = \operatorname{E}[\mathbf{X}]^T A\operatorname{E}[\mathbf{X}] + \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}}),

where K_{\mathbf{X}\mathbf{X}} is the covariance matrix of \mathbf{X} and \operatorname{tr} refers to the trace of a matrix, that is, the sum of the elements on its main diagonal (from upper left to lower right). Since the quadratic form is a scalar, so is its expectation.

Proof: Let \mathbf{z} be an m \times 1 random vector with \operatorname{E}[\mathbf{z}] = \mu and \operatorname{Cov}[\mathbf{z}] = V and let A be an m \times m non-stochastic matrix. Then based on the formula for the covariance, if we denote \mathbf{z}^T = \mathbf{X} and \mathbf{z}^T A^T = \mathbf{Y}, we see that:

:\operatorname{Cov}[\mathbf{X},\mathbf{Y}] = \operatorname{E}[\mathbf{X}\mathbf{Y}^T] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T.

Hence

:\begin{align}
\operatorname{E}[XY^T] &= \operatorname{Cov}[X,Y] + \operatorname{E}[X]\operatorname{E}[Y]^T \\
\operatorname{E}[z^T Az] &= \operatorname{Cov}[z^T,z^T A^T] + \operatorname{E}[z^T]\operatorname{E}[z^T A^T]^T \\
&= \operatorname{Cov}[z^T, z^T A^T] + \mu^T (\mu^T A^T)^T \\
&= \operatorname{Cov}[z^T, z^T A^T] + \mu^T A\mu,
\end{align}

which leaves us to show that

:\operatorname{Cov}[z^T, z^T A^T] = \operatorname{tr}(AV).

This is true based on the fact that one can cyclically permute matrices when taking a trace without changing the end result (e.g.: \operatorname{tr}(AB) = \operatorname{tr}(BA)).

We see that

:\begin{align}
\operatorname{Cov}[z^T,z^T A^T] &= \operatorname{E}\left[\left(z^T - E(z^T)\right)\left(z^T A^T - E\left(z^T A^T\right)\right)^T\right] \\
&= \operatorname{E}\left[(z^T - \mu^T)(z^T A^T - \mu^T A^T)^T\right] \\
&= \operatorname{E}\left[(z - \mu)^T (Az - A\mu)\right].
\end{align}

And since

:(z - \mu)^T (Az - A\mu)

is a scalar, then

:(z - \mu)^T (Az - A\mu) = \operatorname{tr}\left((z - \mu)^T (Az - A\mu)\right) = \operatorname{tr}\left((z - \mu)^T A(z - \mu)\right)

trivially. Using the cyclic permutation we get:

:\operatorname{tr}\left((z - \mu)^T A(z - \mu)\right) = \operatorname{tr}\left(A(z - \mu)(z - \mu)^T\right),

and by plugging this into the original formula we get:

:\begin{align}
\operatorname{Cov}[z^T, z^T A^T] &= \operatorname{E}\left[\operatorname{tr}\left(A(z - \mu)(z - \mu)^T\right)\right] \\
&= \operatorname{tr}\left(A\operatorname{E}\left[(z - \mu)(z - \mu)^T\right]\right) \\
&= \operatorname{tr}(AV).
\end{align}
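
The identity (which holds for any distribution with finite second moments, not only Gaussian) can be verified by Monte Carlo; a sketch with hypothetical parameters, NumPy assumed:

```python
import numpy as np

rng = np.random.default_rng(6)
mu = np.array([1.0, -1.0, 2.0])
K = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.4],
              [0.0, 0.4, 1.5]])
A = rng.random((3, 3))  # arbitrary non-stochastic matrix

X = rng.multivariate_normal(mu, K, size=500_000)
lhs = np.mean(np.einsum('ij,jk,ik->i', X, A, X))  # Monte Carlo estimate of E[X^T A X]
rhs = mu @ A @ mu + np.trace(A @ K)               # E[X]^T A E[X] + tr(A K_XX)
print(lhs, rhs)  # approximately equal
```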


Expectation of the product of two different quadratic forms

One can take the expectation of the product of two different quadratic forms in a zero-mean Gaussian random vector \mathbf{X} as follows:

:\operatorname{E}\left[(\mathbf{X}^T A\mathbf{X})(\mathbf{X}^T B\mathbf{X})\right] = 2\operatorname{tr}(A K_{\mathbf{X}\mathbf{X}} B K_{\mathbf{X}\mathbf{X}}) + \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}})\operatorname{tr}(B K_{\mathbf{X}\mathbf{X}}),

where again K_{\mathbf{X}\mathbf{X}} is the covariance matrix of \mathbf{X}. Again, since both quadratic forms are scalars and hence their product is a scalar, the expectation of their product is also a scalar.
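
A Monte Carlo sketch of this Gaussian identity (NumPy assumed; A and B chosen symmetric, all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
K = np.array([[1.5, 0.4],
              [0.4, 1.0]])
A = np.array([[1.0, 0.2],
              [0.2, 2.0]])   # symmetric
B = np.array([[0.5, -0.1],
              [-0.1, 1.0]])  # symmetric

X = rng.multivariate_normal([0.0, 0.0], K, size=2_000_000)
qA = np.einsum('ij,jk,ik->i', X, A, X)  # X^T A X for each sample
qB = np.einsum('ij,jk,ik->i', X, B, X)  # X^T B X for each sample

lhs = np.mean(qA * qB)
rhs = 2 * np.trace(A @ K @ B @ K) + np.trace(A @ K) * np.trace(B @ K)
print(lhs, rhs)  # approximately equal
```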


Applications


Portfolio theory

In portfolio theory in finance, an objective often is to choose a portfolio of risky assets such that the distribution of the random portfolio return has desirable properties. For example, one might want to choose the portfolio return having the lowest variance for a given expected value. Here the random vector is the vector \mathbf{r} of random returns on the individual assets, and the portfolio return ''p'' (a random scalar) is the inner product of the vector of random returns with a vector ''w'' of portfolio weights (the fractions of the portfolio placed in the respective assets). Since ''p'' = w^T\mathbf{r}, the expected value of the portfolio return is w^T\operatorname{E}(\mathbf{r}) and the variance of the portfolio return can be shown to be w^T C w, where C is the covariance matrix of \mathbf{r}.
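
A minimal sketch (NumPy assumed) with hypothetical expected returns and covariances for three assets:

```python
import numpy as np

# Hypothetical expected returns and return covariance matrix for three assets.
E_r = np.array([0.05, 0.08, 0.12])
C = np.array([[0.010, 0.002, 0.001],
              [0.002, 0.030, 0.004],
              [0.001, 0.004, 0.060]])
w = np.array([0.5, 0.3, 0.2])  # portfolio weights, summing to 1

expected_return = w @ E_r  # w^T E(r)
variance = w @ C @ w       # w^T C w
print(expected_return, variance)
```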


Regression theory

In linear regression theory, we have data on ''n'' observations on a dependent variable ''y'' and ''n'' observations on each of ''k'' independent variables ''x_j''. The observations on the dependent variable are stacked into a column vector ''y''; the observations on each independent variable are also stacked into column vectors, and these latter column vectors are combined into a design matrix ''X'' (not denoting a random vector in this context) of observations on the independent variables. Then the following regression equation is postulated as a description of the process that generated the data:

:y = X\beta + e,

where β is a postulated fixed but unknown vector of ''k'' response coefficients, and ''e'' is an unknown random vector reflecting random influences on the dependent variable. By some chosen technique such as ordinary least squares, a vector \hat\beta is chosen as an estimate of β, and the estimate of the vector ''e'', denoted \hat e, is computed as

:\hat e = y - X\hat\beta.

Then the statistician must analyze the properties of \hat\beta and \hat e, which are viewed as random vectors since a randomly different selection of ''n'' cases to observe would have resulted in different values for them.
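
A compact sketch of the y = X\beta + e setup and its OLS estimate (NumPy assumed; the true \beta and noise level are hypothetical, chosen only so the recovery can be seen):

```python
import numpy as np

rng = np.random.default_rng(8)
n, k = 100, 2
X = np.column_stack([np.ones(n), rng.random((n, k))])  # design matrix with an intercept column
beta_true = np.array([1.0, 2.0, -0.5])                 # "unknown" coefficients, fixed here for the demo
y = X @ beta_true + 0.1 * rng.standard_normal(n)       # e: random influences on y

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)       # ordinary least squares estimate of beta
e_hat = y - X @ beta_hat                               # estimated residual vector
print(beta_hat)  # close to [1.0, 2.0, -0.5]
```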


Vector time series

The evolution of a ''k''×1 random vector \mathbf{X} through time can be modelled as a vector autoregression (VAR) as follows:

:\mathbf{X}_t = c + A_1 \mathbf{X}_{t-1} + A_2 \mathbf{X}_{t-2} + \cdots + A_p \mathbf{X}_{t-p} + \mathbf{e}_t,

where the ''i''-periods-back vector observation \mathbf{X}_{t-i} is called the ''i''-th lag of \mathbf{X}, ''c'' is a ''k'' × 1 vector of constants (intercepts), A_i is a time-invariant ''k'' × ''k'' matrix and \mathbf{e}_t is a ''k'' × 1 random vector of error terms.
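
A simulation sketch of a VAR(1), i.e. p = 1 (NumPy assumed; c and A_1 are hypothetical, with A_1 chosen stable):

```python
import numpy as np

rng = np.random.default_rng(9)
k, T = 2, 500
c = np.array([0.1, -0.2])
A1 = np.array([[0.5, 0.1],
               [0.0, 0.4]])  # eigenvalues inside the unit circle, so the process is stable

X = np.zeros((T, k))
for t in range(1, T):
    e_t = 0.1 * rng.standard_normal(k)  # random vector of error terms
    X[t] = c + A1 @ X[t - 1] + e_t

# The simulated path fluctuates around the stationary mean (I - A1)^{-1} c.
print(X[-100:].mean(axis=0), np.linalg.solve(np.eye(k) - A1, c))
```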




Further reading

*{{cite book |first=Henry |last=Stark |first2=John W. |last2=Woods |title=Probability, Statistics, and Random Processes for Engineers |publisher=Pearson |edition=Fourth |year=2012 |chapter=Random Vectors |pages=295–339 |isbn=978-0-13-231123-6}}