Conditional independence

In probability theory, conditional independence describes situations wherein an observation is irrelevant or redundant when evaluating the certainty of a hypothesis. Conditional independence is usually formulated in terms of conditional probability, as a special case where the probability of the hypothesis given the uninformative observation is equal to the probability without. If A is the hypothesis, and B and C are observations, conditional independence can be stated as an equality:

:P(A \mid B, C) = P(A \mid C)

where P(A \mid B, C) is the probability of A given both B and C. Since the probability of A given C is the same as the probability of A given both B and C, this equality expresses that B contributes nothing to the certainty of A. In this case, A and B are said to be conditionally independent given C, written symbolically as (A \perp\!\!\!\perp B \mid C). The concept of conditional independence is essential to graph-based theories of statistical inference, as it establishes a mathematical relation between a collection of conditional statements and a graphoid.


Conditional independence of events

Let A, B, and C be events. A and B are said to be conditionally independent given C if and only if P(C) > 0 and:

:P(A \mid B, C) = P(A \mid C)

This property is often written: (A \perp\!\!\!\perp B \mid C), which should be read ((A \perp\!\!\!\perp B) \mid C).

Equivalently, conditional independence may be stated as:

:P(A, B \mid C) = P(A \mid C)\,P(B \mid C)

where P(A, B \mid C) is the joint probability of A and B given C. This alternate formulation states that A and B are independent events, given C. It demonstrates that (A \perp\!\!\!\perp B \mid C) is equivalent to (B \perp\!\!\!\perp A \mid C).
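
The equivalence can be checked numerically on a small joint table. The following Python sketch is illustrative, not part of the original article; the joint distribution is hypothetical, constructed so that A and B are independent given C, and both characterizations are then confirmed to agree:

```python
from itertools import product

# Hypothetical joint distribution P(a, b, c) over three binary events,
# constructed so that A and B are independent conditional on C.
p_c = {0: 0.4, 1: 0.6}            # P(C = c)
p_a = {0: 0.2, 1: 0.7}            # P(A = 1 | C = c)
p_b = {0: 0.5, 1: 0.1}            # P(B = 1 | C = c)

def bern(p, x):
    """P(X = x) for an indicator with success probability p."""
    return p if x == 1 else 1.0 - p

joint = {(a, b, c): bern(p_a[c], a) * bern(p_b[c], b) * p_c[c]
         for a, b, c in product((0, 1), repeat=3)}

def prob(pred):
    """Probability of the set of outcomes (a, b, c) satisfying pred."""
    return sum(p for (a, b, c), p in joint.items() if pred(a, b, c))

for c in (0, 1):
    pc = prob(lambda a, b, cc: cc == c)
    pa = prob(lambda a, b, cc: a == 1 and cc == c) / pc
    pb = prob(lambda a, b, cc: b == 1 and cc == c) / pc
    pab = prob(lambda a, b, cc: a == 1 and b == 1 and cc == c) / pc
    pa_given_bc = prob(lambda a, b, cc: a == 1 and b == 1 and cc == c) \
                  / prob(lambda a, b, cc: b == 1 and cc == c)
    assert abs(pab - pa * pb) < 1e-12        # P(A, B | C) = P(A | C) P(B | C)
    assert abs(pa_given_bc - pa) < 1e-12     # P(A | B, C) = P(A | C)
print("both characterizations hold on this example")
```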


Proof of the equivalent definition

:P(A, B \mid C) = P(A \mid C)\,P(B \mid C)

:iff \frac{P(A, B, C)}{P(C)} = \left(\frac{P(A, C)}{P(C)}\right) \left(\frac{P(B, C)}{P(C)}\right)      (definition of conditional probability)

:iff P(A, B, C) = \frac{P(A, C)\,P(B, C)}{P(C)}      (multiply both sides by P(C))

:iff \frac{P(A, B, C)}{P(B, C)} = \frac{P(A, C)}{P(C)}      (divide both sides by P(B, C))

:iff P(A \mid B, C) = P(A \mid C)      (definition of conditional probability) \therefore


Examples

StackExchange provides some useful examples.


Coloured boxes

Each cell represents a possible outcome. The events \color{red}{R}, \color{blue}{B} and \color{gold}{Y} are represented by the areas shaded red, blue and yellow respectively. The overlap between the events R and B is shaded purple. The probabilities of these events are shaded areas with respect to the total area. In both examples R and B are conditionally independent given Y because:

:\Pr(R, B \mid Y) = \Pr(R \mid Y)\Pr(B \mid Y)

but not conditionally independent given [\text{not } Y] because:

:\Pr(R, B \mid \text{not } Y) \neq \Pr(R \mid \text{not } Y)\Pr(B \mid \text{not } Y)
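
The original figures are not reproduced here, but the same qualitative behaviour can be shown with a hypothetical grid of equally likely cells. The following sketch is illustrative; these particular sets are not the grids from the article's figures:

```python
# Hypothetical universe of 20 equally likely cells. R, B, Y are sets of cells,
# chosen so that R and B are independent within Y but not within "not Y".
cells = set(range(20))
Y = set(range(10))                              # "yellow" region
R = {0, 1, 2, 3, 4} | {10, 11, 12, 13, 14}      # "red" region
B = {0, 1, 5, 6} | {10, 11, 12, 15}             # "blue" region

def pr(event, given):
    """P(event | given) under the uniform distribution on the cells."""
    return len(event & given) / len(given)

not_Y = cells - Y
# Conditionally independent given Y: 2/10 == (5/10) * (4/10)
assert pr(R & B, Y) == pr(R, Y) * pr(B, Y)
# ... but not given "not Y": 3/10 != (5/10) * (4/10)
assert pr(R & B, not_Y) != pr(R, not_Y) * pr(B, not_Y)
print(pr(R & B, Y), pr(R, Y) * pr(B, Y))              # 0.2 0.2
print(pr(R & B, not_Y), pr(R, not_Y) * pr(B, not_Y))  # 0.3 0.2
```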


Proximity and delays

Let events A and B be that person A and person B, respectively, will be home in time for dinner, where both people are randomly sampled from the entire world. Events A and B can be assumed to be independent, i.e. knowledge that A is late barely changes the probability that B will be late. However, if a third event is introduced, namely that person A and person B live in the same neighborhood, the two events are no longer conditionally independent: traffic conditions and weather-related events that might delay person A might delay person B as well. Given the third event and knowledge that person A was late, the probability that person B will be late changes meaningfully.


Dice rolling

Conditional independence depends on the nature of the third event. If you roll two dice, one may assume that they behave independently of each other: looking at the result of one die will not tell you about the result of the other. (That is, the two dice are independent.) If, however, the first die's result is a 3, and someone tells you about a third event, that the sum of the two results is even, then this extra information restricts the options for the second result to an odd number. In other words, two events can be independent, but not conditionally independent.
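
This can be verified by direct enumeration of the 36 outcomes. A minimal Python sketch, using exact rational arithmetic; the specific events (first die shows 3, second die shows 5) are chosen for illustration:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def pr(event, given=None):
    """Exact P(event | given) under the uniform distribution on outcomes."""
    pool = [o for o in outcomes if given is None or given(o)]
    return Fraction(sum(1 for o in pool if event(o)), len(pool))

def first_is_3(o):  return o[0] == 3
def second_is_5(o): return o[1] == 5
def sum_even(o):    return (o[0] + o[1]) % 2 == 0
def both(o):        return first_is_3(o) and second_is_5(o)

# Unconditionally the two dice are independent: 1/36 == (1/6) * (1/6).
assert pr(both) == pr(first_is_3) * pr(second_is_5)
# Conditioned on "the sum is even", they are not: 1/18 != (1/6) * (1/6).
assert pr(both, sum_even) != pr(first_is_3, sum_even) * pr(second_is_5, sum_even)
print(pr(both, sum_even), pr(first_is_3, sum_even) * pr(second_is_5, sum_even))
```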


Height and vocabulary

Height and vocabulary are dependent, since very small people tend to be children, known for their more basic vocabularies. But knowing that two people are 19 years old (i.e., conditional on age), there is no reason to think that one person's vocabulary is larger simply because they are taller.


Conditional independence of random variables

Two discrete random variables X and Y are conditionally independent given a third discrete random variable Z if and only if they are independent in their conditional probability distribution given Z. That is, X and Y are conditionally independent given Z if and only if, given any value of Z, the probability distribution of X is the same for all values of Y and the probability distribution of Y is the same for all values of X. Formally:

:X \perp\!\!\!\perp Y \mid Z \quad\iff\quad F_{X,Y \mid Z=z}(x,y) = F_{X \mid Z=z}(x) \cdot F_{Y \mid Z=z}(y) \quad \text{for all } x, y, z

where F_{X,Y \mid Z=z}(x,y) = \Pr(X \leq x, Y \leq y \mid Z=z) is the conditional cumulative distribution function of X and Y given Z.

Two events R and B are conditionally independent given a σ-algebra \Sigma if

:\Pr(R, B \mid \Sigma) = \Pr(R \mid \Sigma)\Pr(B \mid \Sigma) \text{ a.s.}

where \Pr(A \mid \Sigma) denotes the conditional expectation of the indicator function of the event A, \chi_A, given the σ-algebra \Sigma. That is,

:\Pr(A \mid \Sigma) := \operatorname{E}[\chi_A \mid \Sigma]

Two random variables X and Y are conditionally independent given a σ-algebra \Sigma if the above equation holds for all R in \sigma(X) and B in \sigma(Y).

Two random variables X and Y are conditionally independent given a random variable W if they are independent given ''σ''(''W''): the σ-algebra generated by W. This is commonly written:

:X \perp\!\!\!\perp Y \mid W or
:X \perp Y \mid W

This is read "X is independent of Y, given W"; the conditioning applies to the whole statement: "(X is independent of Y) given W".

:(X \perp\!\!\!\perp Y) \mid W

This notation extends X \perp\!\!\!\perp Y for "X is independent of Y." If W assumes a countable set of values, this is equivalent to the conditional independence of ''X'' and ''Y'' for the events of the form [''W'' = ''w'']. Conditional independence of more than two events, or of more than two random variables, is defined analogously.

The following two examples show that X \perp\!\!\!\perp Y ''neither implies nor is implied by'' (X \perp\!\!\!\perp Y) \mid W.

First, suppose W is 0 with probability 0.5 and 1 otherwise. When ''W'' = 0 take X and Y to be independent, each having the value 0 with probability 0.99 and the value 1 otherwise. When ''W'' = 1, X and Y are again independent, but this time they take the value 1 with probability 0.99. Then (X \perp\!\!\!\perp Y) \mid W. But X and Y are dependent, because Pr(''X'' = 0) < Pr(''X'' = 0 | ''Y'' = 0). This is because Pr(''X'' = 0) = 0.5, but if ''Y'' = 0 then it's very likely that ''W'' = 0 and thus that ''X'' = 0 as well, so Pr(''X'' = 0 | ''Y'' = 0) > 0.5.

For the second example, suppose X \perp\!\!\!\perp Y, each taking the values 0 and 1 with probability 0.5. Let W be the product X \cdot Y. Then, conditional on ''W'' = 0, Pr(''X'' = 0) = 2/3, but Pr(''X'' = 0 | ''Y'' = 0) = 1/2, so (X \perp\!\!\!\perp Y) \mid W is false. This is also an example of Explaining Away. See Kevin Murphy's tutorial where X and Y take the values "brainy" and "sporty".
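
Both counterexamples can be verified by exact enumeration over the small joint distributions described above. A Python sketch (illustrative; it simply tabulates the stated models):

```python
from fractions import Fraction as F

# Example 1: (X indep Y) | W holds, but X and Y are marginally dependent.
joint1 = {}
for w, p_one in ((0, F(1, 100)), (1, F(99, 100))):   # P(X=1|W=w) = P(Y=1|W=w)
    for x in (0, 1):
        for y in (0, 1):
            px = p_one if x == 1 else 1 - p_one
            py = p_one if y == 1 else 1 - p_one
            joint1[(w, x, y)] = F(1, 2) * px * py     # P(W=w) = 1/2

def pr(joint, pred):
    """Exact probability of the outcomes satisfying pred."""
    return sum(p for k, p in joint.items() if pred(*k))

p_x0 = pr(joint1, lambda w, x, y: x == 0)
p_x0_given_y0 = pr(joint1, lambda w, x, y: x == 0 and y == 0) \
                / pr(joint1, lambda w, x, y: y == 0)
print(p_x0, p_x0_given_y0)     # 1/2 < 4901/5000, so X and Y are dependent

# Example 2: X indep Y, W = X*Y; conditioning on W = 0 creates dependence.
joint2 = {(x * y, x, y): F(1, 4) for x in (0, 1) for y in (0, 1)}
p_w0 = pr(joint2, lambda w, x, y: w == 0)
p_x0_w0 = pr(joint2, lambda w, x, y: x == 0 and w == 0) / p_w0
p_x0_y0_w0 = pr(joint2, lambda w, x, y: x == 0 and y == 0 and w == 0) \
             / pr(joint2, lambda w, x, y: y == 0 and w == 0)
print(p_x0_w0, p_x0_y0_w0)     # 2/3 vs 1/2: not conditionally independent given W
```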


Conditional independence of random vectors

Two random vectors \mathbf{X}=(X_1,\ldots,X_l)^{\mathrm{T}} and \mathbf{Y}=(Y_1,\ldots,Y_m)^{\mathrm{T}} are conditionally independent given a third random vector \mathbf{Z}=(Z_1,\ldots,Z_n)^{\mathrm{T}} if and only if they are independent in their conditional cumulative distribution given \mathbf{Z}. Formally:

:\mathbf{X} \perp\!\!\!\perp \mathbf{Y} \mid \mathbf{Z} \quad\iff\quad F_{\mathbf{X},\mathbf{Y} \mid \mathbf{Z}=\mathbf{z}}(\mathbf{x},\mathbf{y}) = F_{\mathbf{X} \mid \mathbf{Z}=\mathbf{z}}(\mathbf{x}) \cdot F_{\mathbf{Y} \mid \mathbf{Z}=\mathbf{z}}(\mathbf{y}) \quad \text{for all } \mathbf{x}, \mathbf{y}, \mathbf{z}

where \mathbf{x}=(x_1,\ldots,x_l)^{\mathrm{T}}, \mathbf{y}=(y_1,\ldots,y_m)^{\mathrm{T}} and \mathbf{z}=(z_1,\ldots,z_n)^{\mathrm{T}}, and the conditional cumulative distributions are defined as follows.

:\begin{align}
F_{\mathbf{X},\mathbf{Y} \mid \mathbf{Z}=\mathbf{z}}(\mathbf{x},\mathbf{y}) &= \Pr(X_1 \leq x_1,\ldots,X_l \leq x_l, Y_1 \leq y_1,\ldots,Y_m \leq y_m \mid Z_1=z_1,\ldots,Z_n=z_n) \\
F_{\mathbf{X} \mid \mathbf{Z}=\mathbf{z}}(\mathbf{x}) &= \Pr(X_1 \leq x_1,\ldots,X_l \leq x_l \mid Z_1=z_1,\ldots,Z_n=z_n) \\
F_{\mathbf{Y} \mid \mathbf{Z}=\mathbf{z}}(\mathbf{y}) &= \Pr(Y_1 \leq y_1,\ldots,Y_m \leq y_m \mid Z_1=z_1,\ldots,Z_n=z_n)
\end{align}
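
For intuition, a quick simulation (a hypothetical construction, shown with scalar components for brevity): two variables driven by a shared Z are marginally correlated, yet within each value of Z the correlation vanishes, as conditional independence requires for this construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Hypothetical construction: given Z, X and Y are independent by design.
z = rng.integers(0, 3, size=n)      # Z uniform on {0, 1, 2}
x = z + rng.normal(size=n)          # X = Z + independent noise
y = z + rng.normal(size=n)          # Y = Z + independent noise

print("marginal corr(X, Y) ~", round(np.corrcoef(x, y)[0, 1], 3))   # about 0.4
for v in (0, 1, 2):
    m = z == v
    print(f"corr(X, Y | Z={v}) ~", round(np.corrcoef(x[m], y[m])[0, 1], 3))  # about 0
```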


Uses in Bayesian inference

Let ''p'' be the proportion of voters who will vote "yes" in an upcoming referendum. In taking an opinion poll, one chooses ''n'' voters randomly from the population. For ''i'' = 1, …, ''n'', let ''X''''i'' = 1 or 0 according to whether or not the ''i''th chosen voter will vote "yes".

In a frequentist approach to statistical inference, one would not attribute any probability distribution to ''p'' (unless the probabilities could be somehow interpreted as relative frequencies of occurrence of some event or as proportions of some population), and one would say that ''X''1, …, ''X''''n'' are independent random variables.

By contrast, in a Bayesian approach to statistical inference, one would assign a probability distribution to ''p'' regardless of the non-existence of any such "frequency" interpretation, and one would construe the probabilities as degrees of belief that ''p'' is in any interval to which a probability is assigned. In that model, the random variables ''X''1, …, ''X''''n'' are ''not'' independent, but they are conditionally independent given the value of ''p''. In particular, if a large number of the ''X''s are observed to be equal to 1, that would imply a high conditional probability, given that observation, that ''p'' is near 1, and thus a high conditional probability, given that observation, that the ''next'' ''X'' to be observed will be equal to 1.
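
A short simulation makes this concrete under one hypothetical prior, p ~ Uniform(0, 1): marginally, observing X1 = 1 raises the probability that X2 = 1 (from 1/2 to 2/3), so the Xs are dependent; once p is (approximately) fixed, the dependence disappears.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 500_000
# Hypothetical Bayesian model: p ~ Uniform(0, 1), then X1, X2 iid Bernoulli(p).
p = rng.uniform(size=trials)
x1 = rng.random(trials) < p
x2 = rng.random(trials) < p

# Marginally the Xs are dependent: P(X2=1) = 1/2, but P(X2=1 | X1=1) = 2/3.
print("P(X2=1)        ~", round(x2.mean(), 3))         # about 0.5
print("P(X2=1 | X1=1) ~", round(x2[x1].mean(), 3))     # about 0.667

# Given p (here: p restricted to a narrow band), the dependence vanishes:
band = (0.69 < p) & (p < 0.71)
print("P(X2=1 | p~0.7)        ~", round(x2[band].mean(), 3))        # about 0.7
print("P(X2=1 | X1=1, p~0.7)  ~", round(x2[band & x1].mean(), 3))   # about 0.7
```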


Rules of conditional independence

A set of rules governing statements of conditional independence has been derived from the basic definition (Pearl 2000). These rules were termed "Graphoid Axioms" by Pearl and Paz, because they hold in graphs, where X \perp\!\!\!\perp A \mid B is interpreted to mean: "All paths from ''X'' to ''A'' are intercepted by the set ''B''".


Symmetry

: X \perp\!\!\!\perp Y \quad \Rightarrow \quad Y \perp\!\!\!\perp X


Decomposition

: X \perp\!\!\!\perp A, B \quad \Rightarrow \quad \begin{cases} X \perp\!\!\!\perp A \\ X \perp\!\!\!\perp B \end{cases}

Proof

* p_{X,A,B}(x,a,b) = p_X(x)\, p_{A,B}(a,b)      (meaning of X \perp\!\!\!\perp A, B)
* \int_B p_{X,A,B}(x,a,b)\,db = \int_B p_X(x)\, p_{A,B}(a,b)\,db      (ignore variable ''B'' by integrating it out)
* p_{X,A}(x,a) = p_X(x)\, p_A(a)

A similar proof shows the independence of ''X'' and ''B''.


Weak union

: X \perp\!\!\!\perp A, B \quad \Rightarrow \quad \begin{cases} X \perp\!\!\!\perp A \mid B \\ X \perp\!\!\!\perp B \mid A \end{cases}

Proof

* By assumption, \Pr(X) = \Pr(X \mid A, B).
* Due to the property of decomposition X \perp\!\!\!\perp B, \Pr(X) = \Pr(X \mid B).
* Combining the above two equalities gives \Pr(X \mid B) = \Pr(X \mid A, B), which establishes X \perp\!\!\!\perp A \mid B.

The second condition can be proved similarly.


Contraction

: \left.\begin{matrix} X \perp\!\!\!\perp A \mid B \\ X \perp\!\!\!\perp B \end{matrix}\right\} \quad \Rightarrow \quad X \perp\!\!\!\perp A, B

Proof

This property can be proved by noticing \Pr(X \mid A, B) = \Pr(X \mid B) = \Pr(X), where the equalities are asserted by X \perp\!\!\!\perp A \mid B and X \perp\!\!\!\perp B, respectively.


Intersection

For strictly positive probability distributions, the following also holds:

: \left.\begin{matrix} X \perp\!\!\!\perp Y \mid Z, W \\ X \perp\!\!\!\perp W \mid Z, Y \end{matrix}\right\} \quad \Rightarrow \quad X \perp\!\!\!\perp W, Y \mid Z

Proof

By assumption:

:P(X \mid Z, W, Y) = P(X \mid Z, W) \quad\text{and}\quad P(X \mid Z, W, Y) = P(X \mid Z, Y) \quad\implies\quad P(X \mid Z, Y) = P(X \mid Z, W)

Using this equality, together with the law of total probability applied to P(X \mid Z):

:\begin{align}
P(X \mid Z) &= \sum_{w} P(X \mid Z, W=w)\,P(W=w \mid Z) \\
&= \sum_{w} P(X \mid Y, Z)\,P(W=w \mid Z) \\
&= P(X \mid Z, Y) \sum_{w} P(W=w \mid Z) \\
&= P(X \mid Z, Y)
\end{align}

Since P(X \mid Z, W, Y) = P(X \mid Z, Y) and P(X \mid Z, Y) = P(X \mid Z), it follows that P(X \mid Z, W, Y) = P(X \mid Z), which is equivalent to X \perp\!\!\!\perp Y, W \mid Z.

Technical note: since these implications hold for any probability space, they will still hold if one considers a sub-universe by conditioning everything on another variable, say ''K''. For example, X \perp\!\!\!\perp Y \Rightarrow Y \perp\!\!\!\perp X would also mean that X \perp\!\!\!\perp Y \mid K \Rightarrow Y \perp\!\!\!\perp X \mid K.
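
As a numerical sanity check on all four rules above, the sketch below (illustrative; every parameter value is hypothetical) builds small joint tables over binary variables that satisfy each rule's premises by construction, and verifies the conclusions by enumeration:

```python
from itertools import product

def marg(p, fixed):
    """Marginal probability that variable index a takes value v for (a, v) in fixed."""
    return sum(q for k, q in p.items() if all(k[a] == v for a, v in fixed))

def ci(p, i, j, cond):
    """Numerically test V_i indep V_j | {V_k : k in cond} on a joint dict over bit-tuples."""
    for vals in product((0, 1), repeat=len(cond) + 2):
        zi = list(zip(cond, vals[2:]))
        lhs = marg(p, [(i, vals[0]), (j, vals[1])] + zi) * marg(p, zi)
        rhs = marg(p, [(i, vals[0])] + zi) * marg(p, [(j, vals[1])] + zi)
        if abs(lhs - rhs) > 1e-12:
            return False
    return True

def bern(p, v):
    return p if v == 1 else 1 - p

# Decomposition and weak union: p(x,a,b) = p(x) q(a,b), so X indep (A,B) by design.
px = 0.3
q = [r / 10 for r in (1, 2, 3, 4)]                 # strictly positive q(a,b)
pj = {(x, a, b): bern(px, x) * q[2 * a + b]
      for x, a, b in product((0, 1), repeat=3)}
assert ci(pj, 0, 1, ()) and ci(pj, 0, 2, ())       # decomposition: X⫫A, X⫫B
assert ci(pj, 0, 1, (2,)) and ci(pj, 0, 2, (1,))   # weak union: X⫫A|B, X⫫B|A

# Contraction: p(x,a,b) = p(x) p(b) p(a|b) satisfies X⫫A|B and X⫫B;
# the conclusion X⫫(A,B) is the factorization p(x,a,b) = p(x) p(a,b).
pb, pa_b = 0.6, {0: 0.2, 1: 0.7}                   # hypothetical parameters
pk = {(x, a, b): bern(px, x) * bern(pb, b) * bern(pa_b[b], a)
      for x, a, b in product((0, 1), repeat=3)}
assert ci(pk, 0, 1, (2,)) and ci(pk, 0, 2, ())     # premises
assert all(abs(pk[(x, a, b)] - bern(px, x) * marg(pk, [(1, a), (2, b)])) < 1e-12
           for x, a, b in product((0, 1), repeat=3))   # conclusion: X⫫(A,B)

# Intersection (needs strict positivity): p(x,y,z,w) = p(z) p(x|z) p(y,w|z)
# satisfies both premises; the conclusion is X⫫(W,Y)|Z.
pz, px_z = 0.5, {0: 0.4, 1: 0.8}
qyw = {0: (0.1, 0.2, 0.3, 0.4), 1: (0.4, 0.3, 0.2, 0.1)}
pm = {(x, y, z, w): bern(pz, z) * bern(px_z[z], x) * qyw[z][2 * y + w]
      for x, y, z, w in product((0, 1), repeat=4)}
assert ci(pm, 0, 1, (2, 3)) and ci(pm, 0, 3, (2, 1))   # premises
assert all(abs(pm[(x, y, z, w)] * marg(pm, [(2, z)]) -
               marg(pm, [(0, x), (2, z)]) * marg(pm, [(1, y), (2, z), (3, w)])) < 1e-12
           for x, y, z, w in product((0, 1), repeat=4))  # conclusion: X⫫(W,Y)|Z
print("all four graphoid rules verified on these examples")
```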


See also

* Graphoid
* Conditional dependence
* de Finetti's theorem
* Conditional expectation


References

* J. Pearl, ''Causality: Models, Reasoning, and Inference'', Cambridge University Press, 2000.
