Kolmogorov's inequality

In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound.


Statement of the inequality

Let ''X''1, ..., ''X''''n'' : Ω → R be independent random variables defined on a common probability space (Ω, ''F'', Pr), with expected value E[''X''''k''] = 0 and variance Var[''X''''k''] < +∞ for ''k'' = 1, ..., ''n''. Then, for each λ > 0,
:\Pr\left(\max_{1 \le k \le n} |S_k| \geq \lambda\right) \leq \frac{1}{\lambda^2} \operatorname{Var}[S_n] \equiv \frac{1}{\lambda^2} \sum_{k=1}^n \operatorname{Var}[X_k] = \frac{1}{\lambda^2} \sum_{k=1}^n \operatorname{E}[X_k^2],
where ''S''''k'' = ''X''1 + ... + ''X''''k''. The convenience of this result is that it bounds the worst-case deviation of the partial-sum process at every intermediate time using only the variance of its value at the end of the time interval. For ''n'' = 1 it reduces to Chebyshev's inequality.
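As a quick numerical illustration, the following is a minimal sketch (assuming NumPy; the Gaussian distribution, sample sizes, and threshold are arbitrary choices for demonstration, not part of the theorem) comparing the empirical probability of the maximal deviation with the bound Var[''S''''n'']/λ².

 import numpy as np
 
 rng = np.random.default_rng(0)
 n, trials, lam = 50, 100_000, 15.0
 
 # X_1, ..., X_n: independent, mean zero, finite variance (standard normal here).
 X = rng.standard_normal((trials, n))
 S = np.cumsum(X, axis=1)                     # partial sums S_1, ..., S_n
 
 lhs = np.mean(np.abs(S).max(axis=1) >= lam)  # empirical Pr(max_k |S_k| >= lambda)
 rhs = n / lam**2                             # Var[S_n] / lambda^2 (= n/lambda^2 here)
 print(f"empirical Pr = {lhs:.4f}  <=  bound = {rhs:.4f}")

For this Gaussian example the empirical probability falls well below the bound, which is to be expected: like Chebyshev's inequality, the bound is distribution-free and therefore typically loose.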


Proof

The following argument employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence S_1, S_2, \dots, S_n is a martingale. Define (Z_i)_{i=0}^n as follows: let Z_0 = 0, and
:Z_{i+1} = \left\{ \begin{array}{ll} S_{i+1} & \text{if } \displaystyle \max_{1 \leq j \leq i} |S_j| < \lambda \\ Z_i & \text{otherwise} \end{array} \right.
for all i. Then (Z_i)_{i=0}^n is also a martingale, since at each step it either follows the original martingale or freezes at its current value. For any martingale M_i with M_0 = 0, we have that
:\begin{align} \sum_{i=1}^n \operatorname{E}\left[(M_i - M_{i-1})^2\right] &= \sum_{i=1}^n \operatorname{E}\left[M_i^2 - 2 M_i M_{i-1} + M_{i-1}^2\right] \\ &= \sum_{i=1}^n \operatorname{E}\left[ M_i^2 - 2 (M_{i-1} + M_i - M_{i-1}) M_{i-1} + M_{i-1}^2 \right] \\ &= \sum_{i=1}^n \operatorname{E}\left[ M_i^2 - M_{i-1}^2 \right] - 2\operatorname{E}\left[ M_{i-1} (M_i - M_{i-1})\right] \\ &= \operatorname{E}[M_n^2] - \operatorname{E}[M_0^2] = \operatorname{E}[M_n^2], \end{align}
where the cross terms vanish because \operatorname{E}[M_{i-1}(M_i - M_{i-1})] = \operatorname{E}\left[M_{i-1} \operatorname{E}[M_i - M_{i-1} \mid M_1, \dots, M_{i-1}]\right] = 0 by the martingale property. Applying this identity to the martingales (Z_i)_{i=0}^n and (S_i)_{i=0}^n, we have
:\begin{align} \Pr\left( \max_{1 \leq i \leq n} |S_i| \geq \lambda\right) &= \Pr\left[|Z_n| \geq \lambda\right] \\ &\leq \frac{1}{\lambda^2} \operatorname{E}[Z_n^2] = \frac{1}{\lambda^2} \sum_{i=1}^n \operatorname{E}\left[(Z_i - Z_{i-1})^2\right] \\ &\leq \frac{1}{\lambda^2} \sum_{i=1}^n \operatorname{E}\left[(S_i - S_{i-1})^2\right] = \frac{1}{\lambda^2} \operatorname{E}[S_n^2] = \frac{1}{\lambda^2} \operatorname{Var}[S_n], \end{align}
where the first inequality follows by Chebyshev's inequality, and the second because each increment Z_i - Z_{i-1} is either S_i - S_{i-1} or zero. This inequality was generalized by Hájek and Rényi in 1955.
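The stopped process (Z_i) used above is straightforward to simulate. The following is a minimal sketch (assuming NumPy; all names and distributional choices are illustrative) that constructs Z path by path and checks the two facts the proof relies on: the events \{\max_i |S_i| \geq \lambda\} and \{|Z_n| \geq \lambda\} coincide, and \operatorname{E}[Z_n^2] \leq \operatorname{E}[S_n^2].

 import numpy as np
 
 rng = np.random.default_rng(1)
 n, trials, lam = 50, 100_000, 15.0
 
 X = rng.standard_normal((trials, n))
 S = np.cumsum(X, axis=1)
 
 # crossed[:, i] is True once max_{j <= i} |S_j| >= lambda.
 crossed = np.maximum.accumulate(np.abs(S), axis=1) >= lam
 
 # Z follows S until the first crossing, then freezes (a stopped martingale).
 Z = S.copy()
 for i in range(1, n):
     Z[:, i] = np.where(crossed[:, i - 1], Z[:, i - 1], S[:, i])
 
 # The maximal event is exactly the terminal event for Z.
 assert np.array_equal(np.abs(Z[:, -1]) >= lam, crossed[:, -1])
 # Stopping can only reduce the second moment, as the proof shows.
 print(np.mean(Z[:, -1] ** 2), "<=", np.mean(S[:, -1] ** 2))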


See also

* Chebyshev's inequality
* Etemadi's inequality
* Landau–Kolmogorov inequality
* Markov's inequality
* Bernstein inequalities (probability theory)

