In probability theory, the multidimensional Chebyshev's inequality is a generalization of Chebyshev's inequality, which puts a bound on the probability of the event that a random variable differs from its expected value by more than a specified amount.
Let X be an N-dimensional random vector with expected value \mu = \operatorname{E}[X] and covariance matrix
: V = \operatorname{E}\left[(X - \mu)(X - \mu)^T\right].
If V is a positive-definite matrix, for any real number t > 0:
: \Pr\left(\sqrt{(X - \mu)^T V^{-1} (X - \mu)} > t\right) \le \frac{N}{t^2}.
Proof
Since V is positive-definite, so is V^{-1}. Define the random variable
: y = (X - \mu)^T V^{-1} (X - \mu).
Since y is nonnegative, Markov's inequality holds:
: \Pr\left(\sqrt{(X - \mu)^T V^{-1} (X - \mu)} > t\right) = \Pr\left(\sqrt{y} > t\right) = \Pr\left(y > t^2\right) \le \frac{\operatorname{E}[y]}{t^2}.
Finally,
: \operatorname{E}[y] = \operatorname{E}\left[(X - \mu)^T V^{-1} (X - \mu)\right] = \operatorname{E}\left[\operatorname{trace}\left(V^{-1} (X - \mu)(X - \mu)^T\right)\right] = \operatorname{trace}\left(V^{-1} V\right) = N,
which completes the proof.
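The bound above can be checked numerically by Monte Carlo simulation. The following sketch (the Gaussian distribution, dimension, and covariance matrix are illustrative choices, not part of the theorem, which holds for any distribution with finite covariance) estimates the left-hand side and compares it with N/t²:

```python
import numpy as np

# Monte Carlo check of the multidimensional Chebyshev inequality:
#   P( sqrt((X - mu)^T V^{-1} (X - mu)) > t ) <= N / t^2.
rng = np.random.default_rng(0)
N = 3                                  # dimension (illustrative)
mu = np.array([1.0, -2.0, 0.5])        # mean vector (illustrative)
A = rng.normal(size=(N, N))
V = A @ A.T + N * np.eye(N)            # a positive-definite covariance matrix
X = rng.multivariate_normal(mu, V, size=200_000)

V_inv = np.linalg.inv(V)
d = X - mu
# Squared Mahalanobis distance (X - mu)^T V^{-1} (X - mu), one value per sample.
mahalanobis_sq = np.einsum("ij,jk,ik->i", d, V_inv, d)

for t in (2.0, 3.0, 5.0):
    empirical = np.mean(np.sqrt(mahalanobis_sq) > t)
    bound = N / t**2
    print(f"t={t}: empirical={empirical:.4f} <= bound={bound:.4f}")
    assert empirical <= bound
```

For a Gaussian the squared Mahalanobis distance is chi-squared with N degrees of freedom, so the empirical tail probabilities sit well below the bound; the inequality is loose here but holds for far heavier-tailed distributions as well.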
Infinite dimensions
There is a straightforward extension of the vector version of Chebyshev's inequality to infinite-dimensional settings. Let X be a random variable which takes values in a Fréchet space \mathcal{X} (equipped with seminorms \|\cdot\|_\alpha). This includes most common settings of vector-valued random variables, e.g., when \mathcal{X} is a Banach space (equipped with a single norm), a Hilbert space, or the finite-dimensional setting as described above.
Suppose that X is of "strong order two", meaning that
: \operatorname{E}\left[\|X\|_\alpha^2\right] < \infty
for every seminorm \|\cdot\|_\alpha. This is a generalization of the requirement that X have finite variance, and is necessary for this strong form of Chebyshev's inequality in infinite dimensions. The terminology "strong order two" is due to Vakhania.[Vakhania, Nikolai Nikolaevich. Probability distributions on linear spaces. New York: North Holland, 1981.]
Let \mu \in \mathcal{X} be the Pettis integral of X (i.e., the vector generalization of the mean), and let
: \sigma_\alpha := \sqrt{\operatorname{E}\left[\|X - \mu\|_\alpha^2\right]}
be the standard deviation with respect to the seminorm \|\cdot\|_\alpha. In this setting we can state the following:
:General version of Chebyshev's inequality. For every k > 0: \Pr\left(\|X - \mu\|_\alpha \ge k \sigma_\alpha\right) \le \frac{1}{k^2}.
Proof. The proof is straightforward, and essentially the same as the finitary version. If \sigma_\alpha = 0, then X is constant (and equal to \mu) almost surely, so the inequality is trivial.
If
: \sigma_\alpha > 0,
then 0 < k \sigma_\alpha < \infty, so we may safely divide by k \sigma_\alpha. The crucial trick in Chebyshev's inequality is to recognize that the indicator of the event \|X - \mu\|_\alpha \ge k \sigma_\alpha is bounded above by \frac{\|X - \mu\|_\alpha^2}{(k \sigma_\alpha)^2}.
The following calculations complete the proof:
: \Pr\left(\|X - \mu\|_\alpha \ge k \sigma_\alpha\right) = \operatorname{E}\left[\mathbf{1}_{\|X - \mu\|_\alpha \ge k \sigma_\alpha}\right] \le \operatorname{E}\left[\frac{\|X - \mu\|_\alpha^2}{(k \sigma_\alpha)^2}\right] = \frac{\operatorname{E}\left[\|X - \mu\|_\alpha^2\right]}{k^2 \sigma_\alpha^2} = \frac{\sigma_\alpha^2}{k^2 \sigma_\alpha^2} = \frac{1}{k^2}.
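The norm-based statement can likewise be checked by simulation. The sketch below uses the sup-norm on R^5 as a stand-in for the single norm of a Banach space; the exponential distribution, dimension, and sample size are illustrative assumptions, and the exact mean and seminorm standard deviation are replaced by their empirical estimates:

```python
import numpy as np

# Monte Carlo check of the seminorm version of Chebyshev's inequality:
#   P( ||X - mu|| >= k * sigma ) <= 1 / k^2,
# where sigma^2 = E[ ||X - mu||^2 ] and ||.|| is here the sup-norm on R^5.
rng = np.random.default_rng(1)
X = rng.exponential(scale=2.0, size=(200_000, 5))

mu = X.mean(axis=0)                     # empirical stand-in for the Pettis mean
norms = np.max(np.abs(X - mu), axis=1)  # sup-norm of the deviation, per sample
sigma = np.sqrt(np.mean(norms**2))      # standard deviation w.r.t. this norm

for k in (1.5, 2.0, 4.0):
    empirical = np.mean(norms >= k * sigma)
    print(f"k={k}: empirical={empirical:.4f} <= 1/k^2={1 / k**2:.4f}")
    assert empirical <= 1 / k**2
```

Because the inequality holds for any norm and any distribution of strong order two, swapping the sup-norm for the Euclidean norm or the samples for any other finite-variance distribution leaves the check valid.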
References
Vakhania, Nikolai Nikolaevich. Probability distributions on linear spaces. New York: North Holland, 1981.