Cramér's Decomposition Theorem
Cramér’s decomposition theorem for a normal distribution is a result of probability theory. It is well known that, given independent normally distributed random variables ξ1 and ξ2, their sum is normally distributed as well. It turns out that the converse is also true: if the sum of two independent random variables is normally distributed, then each summand must be. This result, initially announced by Paul Lévy, was proved by Harald Cramér. It became the starting point for a new subfield of probability theory, the decomposition theory of random variables as sums of independent variables (also known as the arithmetic of probability distributions).

The precise statement of the theorem
Let a random variable ξ be normally distributed and admit a decomposition as a sum ξ=ξ1+ξ2 of two independent random variables. Then the summands ξ1 and ξ2 are normally distributed as well. A proof of Cramér's decomposition theorem uses the theory of entire functions.
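The easy forward direction can be checked numerically. A minimal sketch, using only Python's standard library and illustrative parameter choices: drawing independent samples from N(1, 2²) and N(−1, 3²), the empirical mean and variance of the sum match the predicted N(0, 13), since for independent normals the means and variances add.

```python
import random
import statistics

random.seed(0)
N = 100_000

# Independent draws: xi1 ~ N(1, 2^2), xi2 ~ N(-1, 3^2) (illustrative choices).
xi1 = [random.gauss(1.0, 2.0) for _ in range(N)]
xi2 = [random.gauss(-1.0, 3.0) for _ in range(N)]
xi = [a + b for a, b in zip(xi1, xi2)]

# For independent normals, means and variances add: xi ~ N(0, 13).
# Cramér's theorem is the converse: a normal sum forces normal summands.
sample_mean = statistics.mean(xi)
sample_var = statistics.variance(xi)
print(round(sample_mean, 2), round(sample_var, 1))  # near 0 and near 13
```

The converse itself cannot be demonstrated by simulation; it is the deep part of the theorem and requires the complex-analytic argument mentioned above.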



Independence (Probability Theory)
Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect the odds. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. When dealing with collections of more than two events, two notions of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while mutual independence (or collective independence) of events means, informally speaking, that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables. Mutual independence implies pairwise independence, but not the other way around.
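The gap between pairwise and mutual independence can be seen in a classic two-coin example, sketched here with exact rational arithmetic from the standard library: the events "first flip is heads", "second flip is heads", and "the flips agree" are pairwise independent, yet not mutually independent.

```python
from fractions import Fraction

# Sample space: two fair coin flips, each outcome has probability 1/4.
outcomes = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]
p = Fraction(1, 4)

A = {o for o in outcomes if o[0] == "H"}   # first flip heads
B = {o for o in outcomes if o[1] == "H"}   # second flip heads
C = {o for o in outcomes if o[0] == o[1]}  # the two flips agree

def prob(event):
    return p * len(event)

# Pairwise independent: P(X and Y) == P(X) * P(Y) for every pair ...
assert prob(A & B) == prob(A) * prob(B)
assert prob(A & C) == prob(A) * prob(C)
assert prob(B & C) == prob(B) * prob(C)

# ... but not mutually independent: P(A and B and C) != P(A) P(B) P(C).
print(prob(A & B & C), prob(A) * prob(B) * prob(C))  # 1/4 1/8
```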



Normal Distribution
In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}. The parameter \mu is the mean or expectation of the distribution (and also its median and mode), while the parameter \sigma is its standard deviation. The variance of the distribution is \sigma^2. A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem, which states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases.
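The density formula translates directly into code. A minimal sketch (standard library only; the Riemann-sum integration check is an illustrative approximation, not a rigorous one):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # f(x) = 1/(sigma*sqrt(2*pi)) * exp(-((x - mu)/sigma)^2 / 2)
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Peak of the standard normal density, 1/sqrt(2*pi):
print(round(normal_pdf(0.0), 4))  # 0.3989

# Riemann-sum check that the density integrates to (approximately) 1:
total = sum(normal_pdf(-8.0 + i * 0.001) * 0.001 for i in range(16000))
print(round(total, 4))  # 1.0
```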



Random Variable
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events. It is a mapping or a function from possible outcomes (e.g., the possible upper sides of a flipped coin such as heads H and tails T) in a sample space (e.g., the set \{H, T\}) to a measurable space, often the real numbers (e.g., \{-1, 1\}, in which 1 corresponds to H and -1 corresponds to T). Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error. However, the interpretation of probability is philosophically complicated, and even in specific cases it is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties and can be based upon a rigorous axiomatic setup. In the formal mathematical language of measure theory, a random variable is defined as a measurable function from a probability space to a measurable space.
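The coin-flip mapping described above can be sketched as a plain function from outcomes to real numbers (a minimal illustration, not a measure-theoretic construction):

```python
import random

# A random variable as a function from outcomes to real numbers:
# heads H maps to 1, tails T maps to -1.
def X(outcome):
    return 1 if outcome == "H" else -1

random.seed(42)
flips = [random.choice("HT") for _ in range(10)]  # random outcomes
values = [X(o) for o in flips]                    # the induced values
print(values)  # a list of ten entries, each 1 or -1
```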


Paul Lévy (mathematician)
Paul Pierre Lévy (15 September 1886 – 15 December 1971) was a French mathematician who was active especially in probability theory, introducing fundamental concepts such as local time, stable distributions and characteristic functions. Lévy processes, Lévy flights, Lévy measures, Lévy's constant, the Lévy distribution, the Lévy area, the Lévy arcsine law, and the fractal Lévy C curve are named after him.

Biography
Lévy was born in Paris to a Jewish family which already included several mathematicians. His father Lucien Lévy was an examiner at the École Polytechnique. Lévy attended the École Polytechnique and published his first paper in 1905, at the age of nineteen, while still an undergraduate, in which he introduced the Lévy–Steinitz theorem. His teacher and advisor was Jacques Hadamard. After graduation, he spent a year in military service and then studied for three years at the École des Mines, where he became a professor in 1913.



Harald Cramér
Harald Cramér (25 September 1893 – 5 October 1985) was a Swedish mathematician, actuary, and statistician, specializing in mathematical statistics and probabilistic number theory. John Kingman described him as "one of the giants of statistical theory" (Kingman 1986, p. 186).

Biography
Harald Cramér was born in Stockholm, Sweden on 25 September 1893 and remained close to Stockholm for most of his life. He entered the University of Stockholm as an undergraduate in 1912, where he studied mathematics and chemistry. During this period, he was a research assistant under the famous chemist Hans von Euler-Chelpin, with whom he published his first five articles from 1913 to 1914. Following his lab experience, he began to focus solely on mathematics. He eventually began his doctoral studies in mathematics, supervised by Marcel Riesz at the University of Stockholm. Also influenced by G. H. Hardy, Cramér's research led to a PhD in 1917.


Entire Function
In complex analysis, an entire function, also called an integral function, is a complex-valued function that is holomorphic on the whole complex plane. Typical examples of entire functions are polynomials and the exponential function, and any finite sums, products and compositions of these, such as the trigonometric functions sine and cosine and their hyperbolic counterparts sinh and cosh, as well as derivatives and integrals of entire functions such as the error function. If an entire function f(z) has a root at w, then f(z)/(z - w), taking the limit value at w, is an entire function. On the other hand, the natural logarithm, the reciprocal function, and the square root are all not entire functions, nor can they be continued analytically to an entire function. A transcendental entire function is an entire function that is not a polynomial.

Properties
Every entire function can be represented as a power series f(z) = \sum_{n=0}^{\infty} a_n z^n that converges everywhere in the complex plane, hence uniformly on compact sets.
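The everywhere-convergent power series can be illustrated with the exponential function, the prototypical transcendental entire function. A truncated-series sketch (the term count 30 is an illustrative choice, ample for the arguments used here):

```python
import math

def exp_series(z, terms=30):
    # Partial sum of the everywhere-convergent power series
    # sum_{n >= 0} z^n / n!  for the entire function e^z.
    # Works for real or complex z, since convergence holds on all of C.
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= z / (n + 1)
    return total

print(round(exp_series(2.0), 6))  # agrees with math.exp(2.0)
```

Because the series converges on the whole plane, the same partial sum also approximates e^z at complex arguments, e.g. exp_series(1j * math.pi) is close to -1.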


Raikov's Theorem
Raikov’s theorem, named for Russian mathematician Dmitrii Abramovich Raikov, is a result in probability theory. It is well known that if each of two independent random variables ξ1 and ξ2 has a Poisson distribution, then their sum ξ=ξ1+ξ2 has a Poisson distribution as well. It turns out that the converse is also valid.

Statement of the theorem
Suppose that a random variable ξ has a Poisson distribution and admits a decomposition as a sum ξ=ξ1+ξ2 of two independent random variables. Then the distribution of each summand is a shifted Poisson distribution.

Comment
Raikov's theorem is similar to Cramér’s decomposition theorem. The latter result claims that if a sum of two independent random variables has a normal distribution, then each summand is normally distributed as well.
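As with the normal case, the forward direction here is the elementary one, and it can be verified exactly by convolving the two probability mass functions (a sketch with illustrative rate parameters, standard library only):

```python
import math

def poisson_pmf(k, lam):
    # P(X = k) for X ~ Poisson(lam).
    return math.exp(-lam) * lam**k / math.factorial(k)

lam1, lam2 = 1.5, 2.5

def sum_pmf(n):
    # Convolution: P(xi1 + xi2 = n) for independent Poisson summands.
    return sum(poisson_pmf(k, lam1) * poisson_pmf(n - k, lam2)
               for k in range(n + 1))

# The convolution agrees with a single Poisson(lam1 + lam2) pmf,
# term by term (by the binomial theorem).
for n in range(10):
    assert abs(sum_pmf(n) - poisson_pmf(n, lam1 + lam2)) < 1e-12
print(round(sum_pmf(3), 6), round(poisson_pmf(3, 4.0), 6))  # equal values
```

The converse direction (the theorem itself) again has no such elementary verification; like Cramér's theorem, it is proved with characteristic-function and complex-analytic methods.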



Probability Theorems
Probability is the branch of mathematics concerning numerical descriptions of how likely an event is to occur, or how likely it is that a proposition is true. The probability of an event is a number between 0 and 1, where, roughly speaking, 0 indicates impossibility of the event and 1 indicates certainty (Alan Stuart and Keith Ord, ''Kendall's Advanced Theory of Statistics, Volume 1: Distribution Theory'', 6th Ed, 2009; William Feller, ''An Introduction to Probability Theory and Its Applications'', Vol 1, 3rd Ed, 1968, Wiley). The higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%). These concepts have been given an axiomatic mathematical formalization in probability theory.
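The fair-coin arithmetic can be sketched with exact rational numbers (a minimal illustration of the equally-likely-outcomes rule):

```python
from fractions import Fraction

# Fair coin: two equally likely outcomes, so each has probability 1/2.
outcomes = ["heads", "tails"]
p_heads = Fraction(1, len(outcomes))
print(p_heads, float(p_heads))  # 1/2 0.5
```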



Theorems In Statistics
In mathematics, a theorem is a statement that has been proved, or can be proved. The ''proof'' of a theorem is a logical argument that uses the inference rules of a deductive system to establish that the theorem is a logical consequence of the axioms and previously proved theorems. In the mainstream of mathematics, the axioms and the inference rules are commonly left implicit, and, in this case, they are almost always those of Zermelo–Fraenkel set theory with the axiom of choice, or of a less powerful theory, such as Peano arithmetic. A notable exception is Wiles's proof of Fermat's Last Theorem, which involves the Grothendieck universes, whose existence requires the addition of a new axiom to the set theory. Generally, an assertion that is explicitly called a theorem is a proved result that is not an immediate consequence of other known theorems. Moreover, many authors qualify as ''theorems'' only the most important results, and use the terms ''lemma'', ''proposition'' and ''corollary'' for less important theorems.