
Sub-Gaussian
In probability theory, a subgaussian distribution, the distribution of a subgaussian random variable, is a probability distribution with strong tail decay. More specifically, the tails of a subgaussian distribution are dominated by (i.e. decay at least as fast as) the tails of a Gaussian. This property gives subgaussian distributions their name. Often in analysis, we divide an object (such as a random variable) into two parts, a central bulk and a distant tail, then analyze each separately. In probability, this division usually goes like "Everything interesting happens near the center. The tail event is so rare that we may safely ignore it." Subgaussian distributions are worthy of study because the Gaussian distribution is well understood, and so we can give sharp bounds on the rarity of the tail event. Similarly, the subexponential distributions are also worthy of study. Formally, the probability distribution of a random variable ''X'' is called subgaussian if there is a posi ...
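A minimal illustrative sketch (not from the source): it compares the empirical tail of a bounded, hence subgaussian, variable with a Gaussian-type tail bound C\exp(-t^2/(2\sigma^2)); the uniform distribution and the constants C, \sigma are assumptions chosen only for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# A bounded variable (uniform on [-1, 1]) is subgaussian: its tails
# P(|X| >= t) decay at least as fast as a Gaussian tail C * exp(-t^2 / (2*sigma^2))
# for suitable constants C and sigma (chosen here purely for illustration).
samples = rng.uniform(-1.0, 1.0, size=1_000_000)
C, sigma = 2.0, 1.0  # assumed illustrative constants

for t in (0.5, 1.0, 1.5):
    empirical_tail = np.mean(np.abs(samples) >= t)
    gaussian_bound = C * np.exp(-t**2 / (2 * sigma**2))
    print(f"t={t:.1f}  P(|X|>=t)={empirical_tail:.4f}  bound={gaussian_bound:.4f}")
```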




Hoeffding's Lemma
In probability theory, Hoeffding's lemma is an inequality that bounds the moment-generating function of any bounded random variable, implying that such variables are subgaussian. It is named after the Finnish–American mathematical statistician Wassily Hoeffding. The proof of Hoeffding's lemma uses Taylor's theorem and Jensen's inequality. Hoeffding's lemma is itself used in the proof of Hoeffding's inequality as well as its generalization, McDiarmid's inequality. Statement: Let ''X'' be any real-valued random variable such that a \leq X \leq b almost surely, i.e. with probability one. Then, for all \lambda \in \mathbb{R}, :\mathbb{E}\left[e^{\lambda X}\right] \leq \exp\Big(\lambda\mathbb{E}[X] + \frac{\lambda^2 (b-a)^2}{8}\Big), or equivalently, :\mathbb{E}\left[e^{\lambda (X - \mathbb{E}[X])}\right] \leq \exp\Big(\frac{\lambda^2 (b-a)^2}{8}\Big). Proof: The following proof is direct but somewhat ad hoc. Another proof, with a slightly worse constant, is also available using symmetrization. Statement: This statement and proof use the language of subgaus ...
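A minimal numerical sketch of the lemma (not from the source): it checks \mathbb{E}[e^{\lambda X}] \leq \exp(\lambda\mathbb{E}[X] + \lambda^2(b-a)^2/8) for an assumed Bernoulli(0.3) variable on [0, 1] over a few values of \lambda.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hoeffding's lemma: for a <= X <= b almost surely,
#   E[exp(lambda * X)] <= exp(lambda * E[X] + lambda**2 * (b - a)**2 / 8).
# Empirical check for an assumed Bernoulli(0.3) variable, so a = 0, b = 1.
a, b, p = 0.0, 1.0, 0.3
X = rng.binomial(1, p, size=500_000).astype(float)

for lam in (-2.0, -0.5, 0.5, 2.0):
    mgf = np.mean(np.exp(lam * X))
    bound = np.exp(lam * X.mean() + lam**2 * (b - a) ** 2 / 8)
    print(f"lambda={lam:+.1f}  E[e^(lam X)]={mgf:.4f}  bound={bound:.4f}")
```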



Gaussian Distribution
In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}. The parameter \mu is the mean or expected value of the distribution (and also its median and mode), while the parameter \sigma^2 is the variance. The standard deviation of the distribution is \sigma (sigma). A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a ...
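A minimal sketch of the density formula above (not from the source): it evaluates the normal density for assumed parameters \mu = 1.5, \sigma = 2 and checks that it integrates to 1 and peaks at the mean.

```python
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution with mean mu and standard deviation sigma."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# Illustrative check: the density integrates to 1 and its mode equals the mean.
xs = np.linspace(-10, 10, 200_001)
vals = normal_pdf(xs, mu=1.5, sigma=2.0)
print("integral ~", np.trapz(vals, xs))   # ~1.0
print("argmax   ~", xs[np.argmax(vals)])  # ~1.5 (mean = median = mode)
```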


Orlicz Space
In mathematical analysis, and especially in real, harmonic, and functional analysis, an Orlicz space is a type of function space which generalizes the ''L''''p'' spaces. Like the ''L''''p'' spaces, they are Banach spaces. The spaces are named for Władysław Orlicz, who was the first to define them in 1932. Besides the ''L''''p'' spaces, a variety of function spaces arising naturally in analysis are Orlicz spaces. One such space, ''L'' log⁺ ''L'', which arises in the study of Hardy–Littlewood maximal functions, consists of measurable functions ''f'' such that :\int_X |f(x)| \log^+ |f(x)| \,dx < \infty. In the general definition, \Phi is a Young function, i.e. convex, lower semicontinuous, and non-trivial, in the sense that it is not the zero function x \mapsto 0, and it is not the convex dual of the zero function :x \mapsto \begin{cases} 0 & \text{if } x = 0, \\ +\infty & \text{otherwise.}\end{cases} Orlicz spaces: Let L^\dagger_\Phi be the set of measurable functions ''f'' : ''X'' → R such that the inte ...
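A minimal numerical sketch of the ''L'' log⁺ ''L'' condition above (not from the source): the example function f(x) = x^{-1/2} on (0, 1] is an assumption chosen so that the integral is finite.

```python
import numpy as np

def log_plus(t):
    """Positive part of the logarithm: log+(t) = max(log t, 0)."""
    return np.maximum(np.log(t), 0.0)

# Evaluate the L log+ L integral for an assumed example f(x) = x**(-1/2) on (0, 1].
xs = np.linspace(1e-8, 1.0, 2_000_001)
f = xs ** (-0.5)
integrand = np.abs(f) * log_plus(np.abs(f))
# Finite value (~2), so this f satisfies the L log+ L condition on (0, 1].
print("integral of |f| log+|f| dx ~", np.trapz(integrand, xs))
```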



Chernoff Bound
In probability theory, a Chernoff bound is an exponentially decreasing upper bound on the tail of a random variable based on its moment generating function. The minimum of all such exponential bounds forms ''the'' Chernoff or Chernoff–Cramér bound, which may decay faster than exponential (e.g. sub-Gaussian). It is especially useful for sums of independent random variables, such as sums of Bernoulli random variables. The bound is commonly named after Herman Chernoff who described the method in a 1952 paper, though Chernoff himself attributed it to Herman Rubin. In 1938 Harald Cramér had published an almost identical concept now known as Cramér's theorem. It is a sharper bound than the first- or second-moment-based tail bounds such as Markov's inequality or Chebyshev's inequality, which only yield power-law bounds on tail decay. However, when applied to sums, the Chernoff bound requires the random variables to be independent, a condition that is not required by either Markov's i ...
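A minimal sketch of the method described above (not from the source): it minimizes e^{-\lambda a}\,\mathbb{E}[e^{\lambda S}] over \lambda > 0 for a sum of independent Bernoulli variables and compares the result to the empirical tail; n, p, a and the \lambda grid are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Chernoff bound for S = X_1 + ... + X_n, X_i ~ Bernoulli(p) independent:
#   P(S >= a) <= inf_{lambda > 0} exp(-lambda * a) * E[exp(lambda * S)]
#             =  inf_{lambda > 0} exp(-lambda * a) * (1 - p + p * exp(lambda))**n
n, p, a = 100, 0.5, 65                      # assumed demo parameters
lambdas = np.linspace(1e-3, 5.0, 5000)
bounds = np.exp(-lambdas * a) * (1 - p + p * np.exp(lambdas)) ** n
chernoff = bounds.min()

S = rng.binomial(n, p, size=200_000)        # Monte Carlo samples of the sum
print("Chernoff bound       :", chernoff)
print("empirical P(S >= a)  :", np.mean(S >= a))
```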



Probability Theory
Probability theory or probability calculus is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is no ...


Rademacher Distribution
In probability theory and statistics, the Rademacher distribution (which is named after Hans Rademacher) is a discrete probability distribution where a random variate ''X'' has a 50% chance of being +1 and a 50% chance of being −1. A series (that is, a sum) of Rademacher distributed variables can be regarded as a simple symmetrical random walk where the step size is 1. Mathematical formulation: The probability mass function of this distribution is : f(k) = \left\{\begin{matrix} 1/2 & \text{if }k=-1, \\ 1/2 & \text{if }k=+1, \\ 0 & \text{otherwise.}\end{matrix}\right. In terms of the Dirac delta function, this can be written as : f( k ) = \frac{ 1 }{ 2 } ( \delta (k - 1) + \delta (k + 1)). Bounds on sums of independent Rademacher variables: There are various results in probability theory around analyzing the sum of i.i.d. Rademacher variables, including concentration inequalities such as Bernstein inequalities as well as anti-concentration inequalities like Tomaszewski's conjecture. Co ...
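A minimal sketch tying this to the concentration results mentioned above (not from the source): it samples sums of i.i.d. Rademacher variables and compares the empirical tail to the Hoeffding bound \exp(-t^2/(2n)); n and t are assumed demo parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sum of n i.i.d. Rademacher (+1 or -1, each with probability 1/2) variables.
# Hoeffding's inequality gives P(S_n >= t) <= exp(-t**2 / (2 * n)).
n, t = 100, 20                                   # assumed demo parameters
steps = rng.choice([-1, 1], size=(50_000, n))    # each row is one random walk
S = steps.sum(axis=1)

print("empirical P(S_n >= t):", np.mean(S >= t))
print("Hoeffding bound      :", np.exp(-t**2 / (2 * n)))
```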


Matrix Norm
In the field of mathematics, norms are defined for elements within a vector space. Specifically, when the vector space comprises matrices, such norms are referred to as matrix norms. Matrix norms differ from vector norms in that they must also interact with matrix multiplication. Preliminaries: Given a field K of either real or complex numbers (or any complete subset thereof), let K^{m \times n} be the K-vector space of matrices with m rows and n columns and entries in the field K. A matrix norm is a norm on K^{m \times n}. Norms are often expressed with double vertical bars (like so: \|A\|). Thus, the matrix norm is a function \|\cdot\| : K^{m \times n} \to \R^{+} that must satisfy the following properties: For all scalars \alpha \in K and matrices A, B \in K^{m \times n}, * \|A\| \ge 0 (''positive-valued'') * \|A\| = 0 \iff A = 0_{m,n} (''definite'') * \|\alpha A\| = |\alpha| \, \|A\| (''absolutely homogeneous'') * \|A + B\| \le \|A\| + \| ...
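A minimal sketch of the listed properties (not from the source): it checks them numerically for the Frobenius norm, one concrete matrix norm; the random matrices and scalar are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
alpha = -2.5

fro = lambda M: np.linalg.norm(M, "fro")  # Frobenius norm as a concrete matrix norm

print("positive-valued       :", fro(A) >= 0)
print("definite              :", fro(np.zeros((3, 4))) == 0)
print("absolutely homogeneous:", np.isclose(fro(alpha * A), abs(alpha) * fro(A)))
print("triangle inequality   :", fro(A + B) <= fro(A) + fro(B))
```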




Uniform Boundedness
In mathematics, a uniformly bounded family of functions is a family of bounded functions that can all be bounded by the same constant. This constant is larger than or equal to the absolute value of any value of any of the functions in the family. Definition (real line and complex plane): Let :\mathcal F = \{f_i : X \to \mathbb{K},\ i \in I\} be a family of functions indexed by I, where X is an arbitrary set and \mathbb{K} is either the set of real numbers \mathbb{R} or complex numbers \mathbb{C}. We call \mathcal F uniformly bounded if there exists a real number M>0 such that :|f_i(x)| \le M \qquad \forall i \in I, \quad \forall x \in X. Another way of stating this would be the following: :\sup_{i \in I} \sup_{x \in X} |f_i(x)| \le M. Metric space: In general, let Y be a metric space with metric d. Then the set :\mathcal F = \{f_i : X \to Y,\ i \in I\} is called uniformly bounded if there exists an element a from Y and a real number M such that :d(f_i(x), a) \leq M \qquad \forall i \in I, \quad \forall x \in X. Examples: * Every uniformly convergen ...
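A minimal sketch of the definition (not from the source): the family f_n(x) = sin(nx) is uniformly bounded with M = 1; the index range and evaluation grid are assumptions for the demo.

```python
import numpy as np

# The family f_n(x) = sin(n * x), n = 1..50, is uniformly bounded with M = 1:
# |sin(n * x)| <= 1 for every index n and every x (checked here on a finite grid).
xs = np.linspace(-10, 10, 10_001)
ns = np.arange(1, 51)
values = np.sin(np.outer(ns, xs))      # row i holds f_{n_i} evaluated on the grid

sup_over_family_and_x = np.abs(values).max()
print("sup_n sup_x |f_n(x)| ~", sup_over_family_and_x, "<= M = 1")
```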


Quadratic Form
In mathematics, a quadratic form is a polynomial with terms all of degree two ("form" is another name for a homogeneous polynomial). For example, 4x^2 + 2xy - 3y^2 is a quadratic form in the variables x and y. The coefficients usually belong to a fixed field K, such as the real or complex numbers, and one speaks of a quadratic form ''over'' K. Over the reals, a quadratic form is said to be ''definite'' if it takes the value zero only when all its variables are simultaneously zero; otherwise it is ''isotropic''. Quadratic forms occupy a central place in various branches of mathematics, including number theory, linear algebra, group theory (orthogonal groups), differential geometry (the Riemannian metric, the second fundamental form), differential topology (intersection forms of manifolds, especially four-manifolds), Lie theory (the Killing form), and statistics (where the exponent of a zero-mean multivariate normal distribution has the quadratic form -\mathbf{x}^\mathsf ...
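A minimal sketch for the example above (not from the source): it writes 4x^2 + 2xy - 3y^2 as v^T A v with the symmetric matrix A = [[4, 1], [1, -3]]; the evaluation point is an assumption for the demo.

```python
import numpy as np

# The quadratic form 4x^2 + 2xy - 3y^2 equals v^T A v
# with the symmetric matrix A = [[4, 1], [1, -3]] and v = (x, y).
A = np.array([[4.0, 1.0],
              [1.0, -3.0]])

def q(x, y):
    v = np.array([x, y])
    return v @ A @ v

x, y = 2.0, -1.0                       # assumed demo point
print("v^T A v          :", q(x, y))
print("4x^2 + 2xy - 3y^2:", 4 * x**2 + 2 * x * y - 3 * y**2)
```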



Convex Polytope
A convex polytope is a special case of a polytope, having the additional property that it is also a convex set contained in the n-dimensional Euclidean space \mathbb^n. Most texts use the term "polytope" for a bounded convex polytope, and the word "polyhedron" for the more general, possibly unbounded object. Others (e.g. ''Mathematical Programming'' by Melvyn W. Jeter (1986), p. 68), including this article, allow polytopes to be unbounded. The terms "bounded/unbounded convex polytope" will be used below whenever the boundedness is critical to the discussed issue. Yet other texts identify a convex polytope with its boundary. Convex polytopes play an important role both in various branches of mathematics and in applied areas, most notably in linear programming. In the influential textbooks of Grünbaum and Ziegler on the subject, as well as in many other texts in discrete geometry, convex polytopes are often simply called "polytopes". Grünbaum points out that this is solely to avoid ...



Convex Hull
In geometry, the convex hull, convex envelope or convex closure of a shape is the smallest convex set that contains it. The convex hull may be defined either as the intersection of all convex sets containing a given subset of a Euclidean space, or equivalently as the set of all convex combinations of points in the subset. For a bounded subset of the plane, the convex hull may be visualized as the shape enclosed by a rubber band stretched around the subset. Convex hulls of open sets are open, and convex hulls of compact sets are compact. Every compact convex set is the convex hull of its extreme points. The convex hull operator is an example of a closure operator, and every antimatroid can be represented by applying this closure operator to finite sets of points. The algorithmic problems of finding the convex hull of a finite set of points in the plane or other low-dimensional Euclidean spaces, and its dual problem of intersecting half-space (geome ...
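A minimal sketch of the algorithmic problem mentioned above (not from the source): it computes the convex hull of an assumed small planar point set with scipy.spatial.ConvexHull; the interior point is deliberately not a hull vertex.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Convex hull of an assumed planar point set: the interior point (0.5, 0.5)
# is not a vertex of the hull, while the four corners are.
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0],
                   [0.0, 1.0], [0.5, 0.5]])
hull = ConvexHull(points)

print("hull vertex indices:", hull.vertices)           # corner points only
print("hull vertices      :", points[hull.vertices])
print("hull area          :", hull.volume)             # in 2-D, .volume is the area
```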


Union Bound
In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the individual events. This inequality provides an upper bound on the probability of occurrence of at least one of a countable number of events in terms of the individual probabilities of the events. Boole's inequality is named for its discoverer, George Boole. Formally, for a countable set of events ''A''1, ''A''2, ''A''3, ..., we have :\mathbb{P}\left(\bigcup_{i=1}^{\infty} A_i \right) \le \sum_{i=1}^{\infty} \mathbb{P}(A_i). In measure-theoretic terms, Boole's inequality follows from the fact that a measure (and certainly any probability measure) is ''σ''-sub-additive. Thus Boole's inequality holds not only for probability measures \mathbb{P}, but more generally when \mathbb{P} is replaced by any finite measure. Proof: Proof using induction: Boole's inequality may be proved for finite collections of n e ...
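A minimal Monte Carlo sketch of the inequality (not from the source): it checks P(A_1 ∪ A_2 ∪ A_3) ≤ ΣP(A_i) for three assumed overlapping events defined from one uniform random variable.

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.uniform(0.0, 1.0, size=1_000_000)

# Three assumed overlapping events built from the same uniform draw:
events = [u < 0.3, (u > 0.2) & (u < 0.5), u > 0.9]

p_union = np.mean(np.any(np.vstack(events), axis=0))
sum_p = sum(np.mean(e) for e in events)
print("P(A1 u A2 u A3) ~", round(p_union, 4))   # ~0.6
print("sum of P(Ai)    ~", round(sum_p, 4))      # ~0.7, an upper bound as claimed
```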