Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect the odds. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. When dealing with collections of more than two events, two notions of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while mutual independence (or collective independence) of events means, informally speaking, that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables. Mutual independence implies pairwise independence, but not the other way around. In the standard literature of probability theory, statistics, and stochastic processes, independence without further qualification usually refers to mutual independence.


Definition


For events


Two events

Two events A and B are independent (often written as A \perp B or A \perp\!\!\!\perp B, where the latter symbol is also often used for conditional independence) if and only if their joint probability equals the product of their probabilities:

:\mathrm{P}(A \cap B) = \mathrm{P}(A)\mathrm{P}(B).

A \cap B \neq \emptyset indicates that two independent events A and B have common elements in their sample space, so that they are not mutually exclusive (mutually exclusive iff A \cap B = \emptyset). Why this defines independence is made clear by rewriting with conditional probabilities, where \mathrm{P}(A \mid B) = \frac{\mathrm{P}(A \cap B)}{\mathrm{P}(B)} is the probability at which the event A occurs provided that the event B has occurred or is assumed to have occurred:

:\mathrm{P}(A \cap B) = \mathrm{P}(A)\mathrm{P}(B) \iff \mathrm{P}(A \mid B) = \frac{\mathrm{P}(A \cap B)}{\mathrm{P}(B)} = \mathrm{P}(A),

and similarly

:\mathrm{P}(A \cap B) = \mathrm{P}(A)\mathrm{P}(B) \iff \mathrm{P}(B \mid A) = \frac{\mathrm{P}(A \cap B)}{\mathrm{P}(A)} = \mathrm{P}(B).

Thus, the occurrence of B does not affect the probability of A, and vice versa. In other words, A and B are independent of each other. Although the derived expressions may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if \mathrm{P}(A) or \mathrm{P}(B) is 0. Furthermore, the preferred definition makes clear by symmetry that when A is independent of B, B is also independent of A.
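The product rule can be checked directly on a finite sample space. The following minimal Python sketch uses two fair dice and two arbitrarily chosen events for illustration; the events are not part of the definition above.

```python
from itertools import product
from fractions import Fraction

# Sample space: all ordered outcomes of two fair six-sided dice.
omega = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event (a subset of omega) under the uniform measure."""
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] % 2 == 0}   # first die shows an even number
B = {w for w in omega if w[1] > 4}        # second die shows 5 or 6

# Independence: P(A ∩ B) == P(A) * P(B)
assert prob(A & B) == prob(A) * prob(B)
print(prob(A), prob(B), prob(A & B))      # 1/2 1/3 1/6
```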


Odds

Stated in terms of odds, two events are independent if and only if the odds ratio of A and B is unity (1). Analogously with probability, this is equivalent to the conditional odds being equal to the unconditional odds:

:O(A \mid B) = O(A) \text{ and } O(B \mid A) = O(B),

or to the odds of one event, given the other event, being the same as the odds of the event, given the other event not occurring:

:O(A \mid B) = O(A \mid \neg B) \text{ and } O(B \mid A) = O(B \mid \neg A).

The odds ratio can be defined as

:O(A \mid B) : O(A \mid \neg B),

or symmetrically for odds of B given A, and thus is 1 if and only if the events are independent.
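Continuing the two-dice sketch above (the events and probabilities are the same illustrative choices, not part of the article), the equality of conditional and unconditional odds can be verified numerically:

```python
from fractions import Fraction

def odds(p):
    """Odds corresponding to a probability p (assumes 0 < p < 1)."""
    return p / (1 - p)

p_A, p_B = Fraction(1, 2), Fraction(1, 3)
p_A_given_B = Fraction(1, 6) / p_B                     # P(A | B)
p_A_given_notB = (p_A - Fraction(1, 6)) / (1 - p_B)    # P(A | ¬B)

# For independent events the conditional odds equal the unconditional odds,
# so the odds ratio O(A|B) : O(A|¬B) equals 1.
assert odds(p_A_given_B) == odds(p_A) == odds(p_A_given_notB)
```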


More than two events

A finite set of events \{A_i\}_{i=1}^{n} is pairwise independent if every pair of events is independent; that is, if and only if for all distinct pairs of indices m, k,

:\mathrm{P}(A_m \cap A_k) = \mathrm{P}(A_m)\mathrm{P}(A_k).

A finite set of events is mutually independent if every event is independent of any intersection of the other events; that is, if and only if for every k \leq n and for every k indices 1\le i_1 < \dots < i_k \le n,

:\mathrm{P}\left(\bigcap_{j=1}^{k} A_{i_j}\right) = \prod_{j=1}^{k} \mathrm{P}\left(A_{i_j}\right).

This is called the ''multiplication rule'' for independent events. It is not a single condition involving only the product of the probabilities of all the single events; it must hold true for all subsets of events. For more than two events, a mutually independent set of events is (by definition) pairwise independent, but the converse is not necessarily true.
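On a finite sample space, the subset condition can be checked by brute force. The sketch below (helper names and the coin-toss events are illustrative assumptions) tests the multiplication rule for every subset of two or more events:

```python
from itertools import combinations, product
from fractions import Fraction

def prob(event, omega):
    return Fraction(len(event), len(omega))

def mutually_independent(events, omega):
    """Check the multiplication rule for every subset of two or more events."""
    for k in range(2, len(events) + 1):
        for subset in combinations(events, k):
            inter = set(omega)
            p_product = Fraction(1)
            for e in subset:
                inter &= e
                p_product *= prob(e, omega)
            if prob(inter, omega) != p_product:
                return False
    return True

# Three fair coin tosses; the events "toss i is heads" are mutually independent.
omega = list(product("HT", repeat=3))
events = [{w for w in omega if w[i] == "H"} for i in range(3)]
print(mutually_independent(events, omega))   # True
```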


Log probability and information content

Stated in terms of log probability, two events are independent if and only if the log probability of the joint event is the sum of the log probabilities of the individual events:

:\log \mathrm{P}(A \cap B) = \log \mathrm{P}(A) + \log \mathrm{P}(B).

In information theory, negative log probability is interpreted as information content, and thus two events are independent if and only if the information content of the combined event equals the sum of the information contents of the individual events:

:\mathrm{I}(A \cap B) = \mathrm{I}(A) + \mathrm{I}(B).

See information content for details.
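The additivity of information content for independent events can be checked numerically; the probabilities below reuse the illustrative dice events from the earlier sketch and are not prescribed by the article:

```python
import math

def info(p):
    """Information content (in bits) of an event with probability p."""
    return -math.log2(p)

p_A, p_B = 0.5, 1 / 3        # illustrative independent events
p_AB = p_A * p_B             # joint probability under independence

# Additivity of information content for independent events.
assert math.isclose(info(p_AB), info(p_A) + info(p_B))
```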


For real-valued random variables


Two random variables

Two random variables X and Y are independent if and only if (iff) the elements of the π-system generated by them are independent; that is to say, for every x and y, the events \{X \le x\} and \{Y \le y\} are independent events (in the sense defined above for two events). That is, X and Y with cumulative distribution functions F_X(x) and F_Y(y) are independent iff the combined random variable (X,Y) has a joint cumulative distribution function

:F_{X,Y}(x,y) = F_X(x) F_Y(y) \quad \text{for all } x,y,

or equivalently, if the probability densities f_X(x) and f_Y(y) and the joint probability density f_{X,Y}(x,y) exist,

:f_{X,Y}(x,y) = f_X(x) f_Y(y) \quad \text{for all } x,y.
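For discrete random variables with finite support, the factorization of the joint cumulative distribution function can be verified exactly. The following sketch assumes two independent fair dice as an illustrative example:

```python
from fractions import Fraction
from itertools import product

# Joint pmf of two independent fair dice (illustrative discrete example).
pmf_X = {x: Fraction(1, 6) for x in range(1, 7)}
pmf_Y = {y: Fraction(1, 6) for y in range(1, 7)}
joint = {(x, y): pmf_X[x] * pmf_Y[y] for x, y in product(pmf_X, pmf_Y)}

def cdf(pmf, t):
    return sum(p for v, p in pmf.items() if v <= t)

def joint_cdf(jnt, s, t):
    return sum(p for (x, y), p in jnt.items() if x <= s and y <= t)

# F_{X,Y}(x, y) == F_X(x) * F_Y(y) at every point of the (finite) support.
assert all(joint_cdf(joint, x, y) == cdf(pmf_X, x) * cdf(pmf_Y, y)
           for x, y in product(range(1, 7), repeat=2))
```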


More than two random variables

A finite set of n random variables \{X_1,\ldots,X_n\} is pairwise independent if and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarily ''mutually independent'' as defined next.

A finite set of n random variables \{X_1,\ldots,X_n\} is mutually independent if and only if for any sequence of numbers \{x_1,\ldots,x_n\}, the events \{X_1 \le x_1\}, \ldots, \{X_n \le x_n\} are mutually independent events (in the sense defined above for events). This is equivalent to the following condition on the joint cumulative distribution function: a finite set of n random variables \{X_1,\ldots,X_n\} is mutually independent if and only if

:F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = F_{X_1}(x_1) \cdot \ldots \cdot F_{X_n}(x_n) \quad \text{for all } x_1,\ldots,x_n.

It is not necessary here to require that the probability distribution factorizes for all possible subsets, as in the case for n events. This is not required because, for example, F_{X_1,X_2,X_3}(x_1,x_2,x_3) = F_{X_1}(x_1) \cdot F_{X_2}(x_2) \cdot F_{X_3}(x_3) implies F_{X_1,X_3}(x_1,x_3) = F_{X_1}(x_1) \cdot F_{X_3}(x_3).

The measure-theoretically inclined reader may prefer to substitute events \{X \in A\} for events \{X \le x\} in the above definition, where A is any Borel set. That definition is exactly equivalent to the one above when the values of the random variables are real numbers. It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space (which includes topological spaces endowed with appropriate σ-algebras).


For real-valued random vectors

Two random vectors \mathbf{X}=(X_1,\ldots,X_m)^\mathrm{T} and \mathbf{Y}=(Y_1,\ldots,Y_n)^\mathrm{T} are called independent if

:F_{\mathbf{X},\mathbf{Y}}(\mathbf{x},\mathbf{y}) = F_{\mathbf{X}}(\mathbf{x}) \cdot F_{\mathbf{Y}}(\mathbf{y}) \quad \text{for all } \mathbf{x},\mathbf{y},

where F_{\mathbf{X}}(\mathbf{x}) and F_{\mathbf{Y}}(\mathbf{y}) denote the cumulative distribution functions of \mathbf{X} and \mathbf{Y} and F_{\mathbf{X},\mathbf{Y}}(\mathbf{x},\mathbf{y}) denotes their joint cumulative distribution function. Independence of \mathbf{X} and \mathbf{Y} is often denoted by \mathbf{X} \perp\!\!\!\perp \mathbf{Y}. Written component-wise, \mathbf{X} and \mathbf{Y} are called independent if

:F_{X_1,\ldots,X_m,Y_1,\ldots,Y_n}(x_1,\ldots,x_m,y_1,\ldots,y_n) = F_{X_1,\ldots,X_m}(x_1,\ldots,x_m) \cdot F_{Y_1,\ldots,Y_n}(y_1,\ldots,y_n) \quad \text{for all } x_1,\ldots,x_m,y_1,\ldots,y_n.


For stochastic processes


For one stochastic process

The definition of independence may be extended from random vectors to a stochastic process. An independent stochastic process is one for which the random variables obtained by sampling the process at any n times t_1,\ldots,t_n are independent random variables for any n. Formally, a stochastic process \left\{X_t\right\}_{t\in\mathcal{T}} is called independent if and only if for all n\in \mathbb{N} and for all t_1,\ldots,t_n\in\mathcal{T}

:F_{X_{t_1},\ldots,X_{t_n}}(x_1,\ldots,x_n) = F_{X_{t_1}}(x_1) \cdot \ldots \cdot F_{X_{t_n}}(x_n) \quad \text{for all } x_1,\ldots,x_n,

where F_{X_{t_1},\ldots,X_{t_n}}(x_1,\ldots,x_n) = \mathrm{P}(X(t_1) \leq x_1,\ldots,X(t_n) \leq x_n) denotes the joint cumulative distribution function of the sampled random variables. Independence of a stochastic process is a property ''within'' a stochastic process, not between two stochastic processes.


For two stochastic processes

Independence of two stochastic processes is a property between two stochastic processes \left\{X_t\right\}_{t\in\mathcal{T}} and \left\{Y_t\right\}_{t\in\mathcal{T}} that are defined on the same probability space (\Omega,\mathcal{F},P). Formally, two stochastic processes \left\{X_t\right\}_{t\in\mathcal{T}} and \left\{Y_t\right\}_{t\in\mathcal{T}} are said to be independent if for all n\in \mathbb{N} and for all t_1,\ldots,t_n\in\mathcal{T}, the random vectors (X(t_1),\ldots,X(t_n)) and (Y(t_1),\ldots,Y(t_n)) are independent, i.e. if

:F_{X_{t_1},\ldots,X_{t_n},Y_{t_1},\ldots,Y_{t_n}}(x_1,\ldots,x_n,y_1,\ldots,y_n) = F_{X_{t_1},\ldots,X_{t_n}}(x_1,\ldots,x_n) \cdot F_{Y_{t_1},\ldots,Y_{t_n}}(y_1,\ldots,y_n) \quad \text{for all } x_1,\ldots,x_n,y_1,\ldots,y_n.


Independent σ-algebras

The definitions above (for events and for random variables) are both generalized by the following definition of independence for σ-algebras. Let (\Omega, \Sigma, \mathrm{P}) be a probability space and let \mathcal{A} and \mathcal{B} be two sub-σ-algebras of \Sigma. \mathcal{A} and \mathcal{B} are said to be independent if, whenever A \in \mathcal{A} and B \in \mathcal{B},

:\mathrm{P}(A \cap B) = \mathrm{P}(A) \mathrm{P}(B).

Likewise, a finite family of σ-algebras (\tau_i)_{i\in I}, where I is an index set, is said to be independent if and only if

:\forall \left(A_i\right)_{i\in I} \in \prod\nolimits_{i\in I}\tau_i \ : \ \mathrm{P}\left(\bigcap\nolimits_{i\in I}A_i\right) = \prod\nolimits_{i\in I}\mathrm{P}\left(A_i\right)

and an infinite family of σ-algebras is said to be independent if all its finite subfamilies are independent. The new definition relates to the previous ones very directly:

* Two events are independent (in the old sense) if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by an event E \in \Sigma is, by definition,
::\sigma(\{E\}) = \{\emptyset, E, \Omega \setminus E, \Omega\}.
* Two random variables X and Y defined over \Omega are independent (in the old sense) if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by a random variable X taking values in some measurable space S consists, by definition, of all subsets of \Omega of the form X^{-1}(U), where U is any measurable subset of S.

Using this definition, it is easy to show that if X and Y are random variables and Y is constant, then X and Y are independent, since the σ-algebra generated by a constant random variable is the trivial σ-algebra \{\emptyset, \Omega\}. Probability-zero events cannot affect independence, so independence also holds if Y is only \mathrm{P}-almost surely constant.


Properties


Self-independence

Note that an event is independent of itself if and only if

:\mathrm{P}(A) = \mathrm{P}(A \cap A) = \mathrm{P}(A) \cdot \mathrm{P}(A) \iff \mathrm{P}(A) = 0 \text{ or } \mathrm{P}(A) = 1.

Thus an event is independent of itself if and only if it almost surely occurs or its complement almost surely occurs; this fact is useful when proving zero–one laws.


Expectation and covariance

If X and Y are statistically independent random variables, then the expectation operator \operatorname{E} has the property

:\operatorname{E}[X^n Y^m] = \operatorname{E}[X^n]\operatorname{E}[Y^m],

and the covariance \operatorname{cov}[X,Y] is zero, as follows from

:\operatorname{cov}[X,Y] = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y].

The converse does not hold: if two random variables have a covariance of 0, they may still fail to be independent. Similarly for two stochastic processes \left\{X_t\right\}_{t\in\mathcal{T}} and \left\{Y_t\right\}_{t\in\mathcal{T}}: if they are independent, then they are uncorrelated.
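A standard counterexample for the converse (zero covariance without independence) can be verified directly. The particular distribution below, X uniform on \{-1, 0, 1\} and Y = X^2, is an illustrative choice, not taken from the article:

```python
from fractions import Fraction

# X uniform on {-1, 0, 1}, Y = X**2: zero covariance but clearly dependent.
support = [-1, 0, 1]
p = Fraction(1, 3)

E_X  = sum(p * x for x in support)            # 0
E_Y  = sum(p * x**2 for x in support)         # 2/3
E_XY = sum(p * x * x**2 for x in support)     # 0

cov = E_XY - E_X * E_Y
print(cov)   # 0, yet P(Y=0 | X=0) = 1, which differs from P(Y=0) = 1/3
```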


Characteristic function

Two random variables X and Y are independent if and only if the characteristic function of the random vector (X,Y) satisfies

:\varphi_{(X,Y)}(t,s) = \varphi_{X}(t)\cdot \varphi_{Y}(s).

In particular the characteristic function of their sum is the product of their marginal characteristic functions:

:\varphi_{X+Y}(t) = \varphi_X(t)\cdot\varphi_Y(t),

though the reverse implication is not true. Random variables that satisfy the latter condition are called subindependent.
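The factorization of the characteristic function can be illustrated with a Monte Carlo sketch using empirical characteristic functions; the distributions, sample size, and evaluation points below are arbitrary assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)          # X ~ N(0, 1)
y = rng.exponential(size=n)     # Y ~ Exp(1), drawn independently of X

def ecf(samples, t):
    """Empirical characteristic function E[exp(i * t * samples)]."""
    return np.exp(1j * t * samples).mean()

t, s = 0.7, -1.3                                 # arbitrary evaluation points
joint   = np.exp(1j * (t * x + s * y)).mean()    # phi_{(X,Y)}(t, s)
product = ecf(x, t) * ecf(y, s)                  # phi_X(t) * phi_Y(s)

print(abs(joint - product))     # small (Monte Carlo error only)
```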


Examples


Rolling dice

The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time are ''independent''. By contrast, the event of getting a 6 the first time a die is rolled and the event that the sum of the numbers seen on the first and second trial is 8 are ''not'' independent.
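Both claims can be confirmed by exact enumeration of the 36 equally likely outcomes; the short Python sketch below is an illustration of the dice example in this paragraph:

```python
from itertools import product
from fractions import Fraction

omega = list(product(range(1, 7), repeat=2))
prob = lambda event: Fraction(len(event), len(omega))

six_first  = {w for w in omega if w[0] == 6}
six_second = {w for w in omega if w[1] == 6}
sum_is_8   = {w for w in omega if sum(w) == 8}

print(prob(six_first & six_second) == prob(six_first) * prob(six_second))  # True
print(prob(six_first & sum_is_8)   == prob(six_first) * prob(sum_is_8))    # False
```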


Drawing cards

If two cards are drawn ''with'' replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are ''independent''. By contrast, if two cards are drawn ''without'' replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are ''not'' independent, because a deck that has had a red card removed has proportionately fewer red cards.
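The effect of removing a card can be made explicit with an exact calculation; the sketch below assumes a standard 52-card deck with 26 red cards:

```python
from fractions import Fraction

red, deck = Fraction(26), Fraction(52)

# With replacement: the second draw is unaffected by the first.
p_red_given_red_with = red / deck                      # 1/2, same as unconditionally

# Without replacement: one red card is gone, so the conditional probability drops.
p_red_given_red_without = (red - 1) / (deck - 1)       # 25/51 < 1/2

print(p_red_given_red_with, p_red_given_red_without)
```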


Pairwise and mutual independence

Consider two probability spaces, in both of which \mathrm{P}(A) = \mathrm{P}(B) = 1/2 and \mathrm{P}(C) = 1/4. The events in the first space are pairwise independent because \mathrm{P}(A\mid B) = \mathrm{P}(A\mid C) = 1/2 = \mathrm{P}(A), \mathrm{P}(B\mid A) = \mathrm{P}(B\mid C) = 1/2 = \mathrm{P}(B), and \mathrm{P}(C\mid A) = \mathrm{P}(C\mid B) = 1/4 = \mathrm{P}(C); but the three events are not mutually independent. The events in the second space are both pairwise independent and mutually independent. To illustrate the difference, consider conditioning on two events. In the pairwise independent case, although any one event is independent of each of the other two individually, it is not independent of the intersection of the other two:

:\mathrm{P}(A\mid BC) = \frac{\mathrm{P}(A \cap B \cap C)}{\mathrm{P}(B \cap C)} \ne \mathrm{P}(A), \qquad \mathrm{P}(B\mid AC) \ne \mathrm{P}(B), \qquad \mathrm{P}(C\mid AB) \ne \mathrm{P}(C).

In the mutually independent case, however,

:\mathrm{P}(A\mid BC) = \mathrm{P}(A) = \tfrac{1}{2}, \qquad \mathrm{P}(B\mid AC) = \mathrm{P}(B) = \tfrac{1}{2}, \qquad \mathrm{P}(C\mid AB) = \mathrm{P}(C) = \tfrac{1}{4}.
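A classical concrete instance of pairwise but not mutual independence is Bernstein's two-coin example (a different construction from the probability spaces discussed above, used here only as an illustration):

```python
from itertools import product, combinations
from fractions import Fraction

# Bernstein's classical example: two fair coin tosses.
omega = list(product("HT", repeat=2))
prob = lambda e: Fraction(len(e), len(omega))

A = {w for w in omega if w[0] == "H"}     # first toss is heads
B = {w for w in omega if w[1] == "H"}     # second toss is heads
C = {w for w in omega if w[0] == w[1]}    # the two tosses agree

# Pairwise independent: every pair satisfies the product rule ...
print(all(prob(x & y) == prob(x) * prob(y)
          for x, y in combinations([A, B, C], 2)))        # True
# ... but not mutually independent: the triple intersection fails it.
print(prob(A & B & C) == prob(A) * prob(B) * prob(C))     # False
```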


Triple-independence but no pairwise-independence

It is possible to create a three-event example in which

:\mathrm{P}(A \cap B \cap C) = \mathrm{P}(A)\mathrm{P}(B)\mathrm{P}(C),

and yet no two of the three events are pairwise independent (and hence the set of events is not mutually independent). This example shows that mutual independence involves requirements on the products of probabilities of all combinations of events, not just of all the single events.
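One commonly cited construction uses two fair dice; the specific events below are an illustrative assumption, not taken from this article, and can be verified by enumeration:

```python
from itertools import product, combinations
from fractions import Fraction

omega = list(product(range(1, 7), repeat=2))   # two fair dice
prob = lambda e: Fraction(len(e), len(omega))

A = {w for w in omega if w[0] in (1, 2, 3)}    # first die shows 1, 2 or 3
B = {w for w in omega if w[0] in (3, 4, 5)}    # first die shows 3, 4 or 5
C = {w for w in omega if sum(w) == 9}          # the sum of the dice is 9

# The triple product rule holds ...
print(prob(A & B & C) == prob(A) * prob(B) * prob(C))      # True
# ... yet no pair of the events is independent.
print(any(prob(x & y) == prob(x) * prob(y)
          for x, y in combinations([A, B, C], 2)))          # False
```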


Conditional independence


For events

The events A and B are conditionally independent given an event C when

:\mathrm{P}(A \cap B \mid C) = \mathrm{P}(A \mid C) \cdot \mathrm{P}(B \mid C).


For random variables

Intuitively, two random variables X and Y are conditionally independent given Z if, once Z is known, the value of Y does not add any additional information about X. For instance, two measurements X and Y of the same underlying quantity Z are not independent, but they are conditionally independent given Z (unless the errors in the two measurements are somehow connected). The formal definition of conditional independence is based on the idea of conditional distributions. If X, Y, and Z are discrete random variables, then we define X and Y to be conditionally independent given Z if

:\mathrm{P}(X \le x, Y \le y \mid Z = z) = \mathrm{P}(X \le x \mid Z = z) \cdot \mathrm{P}(Y \le y \mid Z = z)

for all x, y and z such that \mathrm{P}(Z=z)>0. On the other hand, if the random variables are continuous and have a joint probability density function f_{XYZ}(x,y,z), then X and Y are conditionally independent given Z if

:f_{XY\mid Z}(x, y \mid z) = f_{X\mid Z}(x \mid z) \cdot f_{Y\mid Z}(y \mid z)

for all real numbers x, y and z such that f_Z(z)>0. If discrete X and Y are conditionally independent given Z, then

:\mathrm{P}(X = x \mid Y = y, Z = z) = \mathrm{P}(X = x \mid Z = z)

for any x, y and z with \mathrm{P}(Z=z)>0. That is, the conditional distribution for X given Y and Z is the same as that given Z alone. A similar equation holds for the conditional probability density functions in the continuous case. Independence can be seen as a special kind of conditional independence, since probability can be seen as a kind of conditional probability given no events.
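The "two noisy measurements of the same quantity" intuition can be checked on a small discrete model; the model below (a fair bit Z with two independent bit-valued measurement errors) is an illustrative assumption:

```python
from itertools import product
from fractions import Fraction

# Toy model: Z is a fair bit; X and Y are two noisy "measurements" of Z,
# each adding an independent fair-bit error.
outcomes = [(z + e1, z + e2, z) for z, e1, e2 in product((0, 1), repeat=3)]
weight = Fraction(1, len(outcomes))            # each (x, y, z) triple has probability 1/8

def P(pred):
    """Probability of the set of outcomes satisfying pred(x, y, z)."""
    return sum(weight for o in outcomes if pred(*o))

# Conditional independence given Z, written without division:
# P(X=a, Y=b, Z=c) * P(Z=c) == P(X=a, Z=c) * P(Y=b, Z=c) for every a, b, c.
print(all(
    P(lambda x, y, z: (x, y, z) == (a, b, c)) * P(lambda x, y, z: z == c)
    == P(lambda x, y, z: (x, z) == (a, c)) * P(lambda x, y, z: (y, z) == (b, c))
    for a, b, c in product((0, 1, 2), (0, 1, 2), (0, 1))
))                                              # True

# X and Y are nevertheless dependent unconditionally.
print(P(lambda x, y, z: x == 0 and y == 0)
      == P(lambda x, y, z: x == 0) * P(lambda x, y, z: y == 0))   # False
```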


History

Before 1933, independence in probability theory was defined in a verbal manner. For example, de Moivre gave the following definition: “Two events are independent, when they have no connexion one with the other, and that the happening of one neither forwards nor obstructs the happening of the other”. If there are n independent events, the probability of the event that all of them happen was computed as the product of the probabilities of these n events. Apparently, there was the conviction that this formula was a consequence of the above definition. (Sometimes this was called the Multiplication Theorem.) Of course, a proof of this assertion cannot work without further, more formal, tacit assumptions. The definition of independence given in this article became the standard definition (now used in all books) after it appeared in 1933 as part of Kolmogorov's axiomatization of probability. Kolmogorov credited it to S. N. Bernstein and quoted a publication which had appeared in Russian in 1927. Unfortunately, neither Bernstein nor Kolmogorov was aware of the work of Georg Bohlmann. Bohlmann had given the same definition for two events in 1901 and for n events in 1908. In the latter paper, he studied his notion in detail. For example, he gave the first example showing that pairwise independence does not imply mutual independence. Even today, Bohlmann is rarely quoted. More about his work can be found in ''On the contributions of Georg Bohlmann to probability theory'' by Ulrich Krengel (Electronic Journal for History of Probability and Statistics, 2011).


See also

* Copula (statistics)
* Independent and identically distributed random variables
* Mean dependence
* Normally distributed and uncorrelated does not imply independent

