In probability theory, conditional probability is a measure of the probability of an event occurring, given that another event (by assumption, presumption, assertion or evidence) has already occurred. This particular method relies on event ''B'' occurring with some sort of relationship with another event ''A''. In this situation, the event ''B'' can be analyzed by a conditional probability with respect to ''A''. If the event of interest is ''A'' and the event ''B'' is known or assumed to have occurred, "the conditional probability of ''A'' given ''B''", or "the probability of ''A'' under the condition ''B''", is usually written as P(A \mid B) or occasionally P_B(A). This can also be understood as the fraction of the probability of ''B'' that intersects with ''A'':

:P(A \mid B) = \frac{P(A \cap B)}{P(B)}.

For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing. For example, the conditional probability that someone unwell (sick) is coughing might be 75%, in which case we would have that P(Cough) = 5% and P(Cough \mid Sick) = 75%. Although there is a relationship between ''A'' and ''B'' in this example, such a relationship or dependence between ''A'' and ''B'' is not necessary, nor do they have to occur simultaneously.

P(A \mid B) may or may not be equal to P(A) (the unconditional probability of ''A''). If P(A \mid B) = P(A), then events ''A'' and ''B'' are said to be ''independent'': in such a case, knowledge about either event does not alter the likelihood of the other. P(A \mid B) (the conditional probability of ''A'' given ''B'') typically differs from P(B \mid A) (the conditional probability of ''B'' given ''A''). For example, if a person has dengue fever, the person might have a 90% chance of testing positive for the disease. In this case, what is being measured is that if event ''B'' (''having dengue'') has occurred, the probability of ''A'' (''testing positive'') given that ''B'' occurred is 90%, simply writing P(A \mid B) = 90%. Alternatively, if a person tests positive for dengue fever, they may have only a 15% chance of actually having this rare disease due to high false positive rates. In this case, the probability of the event ''B'' (''having dengue'') given that the event ''A'' (''testing positive'') has occurred is 15%, or P(B \mid A) = 15%. Falsely equating the two probabilities can lead to various errors of reasoning, commonly seen through base rate fallacies.

While conditional probabilities can provide extremely useful information, limited information is often supplied or at hand. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem:

:P(A \mid B) = \frac{P(B \mid A) P(A)}{P(B)}.

Another option is to display conditional probabilities in a conditional probability table to illuminate the relationship between events.


Definition


Conditioning on an event


Kolmogorov definition

Given two events ''A'' and ''B'' from the sigma-field of a probability space, with the unconditional probability of ''B'' being greater than zero (i.e., P(B) > 0), the conditional probability of ''A'' given ''B'' (P(A \mid B)) is the probability of ''A'' occurring if ''B'' has or is assumed to have happened. ''A'' is assumed to be the set of all possible outcomes of an experiment or random trial that has a restricted or reduced sample space. The conditional probability can be found by the quotient of the probability of the joint intersection of events ''A'' and ''B'' (P(A \cap B)), that is, the probability at which ''A'' and ''B'' occur together (although not necessarily at the same time), and the probability of ''B'':

:P(A \mid B) = \frac{P(A \cap B)}{P(B)}.

For a sample space consisting of equally likely outcomes, the probability of the event ''A'' is understood as the fraction of the number of outcomes in ''A'' to the number of all outcomes in the sample space. Then, this equation is understood as the fraction of the set A \cap B to the set ''B''. Note that the above equation is a definition, not just a theoretical result. We denote the quantity \frac{P(A \cap B)}{P(B)} as P(A \mid B) and call it the "conditional probability of ''A'' given ''B''."
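To make the definition concrete, here is a minimal Python sketch (not part of the original article) that computes a conditional probability by counting outcomes in a finite sample space of equally likely outcomes; the die and the event sets are illustrative assumptions:

```python
from fractions import Fraction

def cond_prob(A, B, omega):
    """Kolmogorov definition over a finite sample space of equally
    likely outcomes: P(A | B) = |A intersect B| / |B|."""
    A, B = set(A) & set(omega), set(B) & set(omega)
    if not B:
        raise ValueError("P(B) = 0: conditional probability undefined")
    return Fraction(len(A & B), len(B))

# One fair die: P(even | greater than 3) = |{4, 6}| / |{4, 5, 6}| = 2/3
print(cond_prob({2, 4, 6}, {4, 5, 6}, range(1, 7)))  # 2/3
```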


As an axiom of probability

Some authors, such as de Finetti, prefer to introduce conditional probability as an axiom of probability:

:P(A \cap B) = P(A \mid B)P(B).

This equation for a conditional probability, although mathematically equivalent, may be intuitively easier to understand. It can be interpreted as "the probability of ''B'' occurring, multiplied by the probability of ''A'' occurring provided that ''B'' has occurred, is equal to the probability of ''A'' and ''B'' occurring together, although not necessarily at the same time". Additionally, this may be preferred philosophically; under major probability interpretations, such as the subjective theory, conditional probability is considered a primitive entity. Moreover, this "multiplication rule" can be practically useful in computing the probability of A \cap B and introduces a symmetry with the summation axiom for the Poincaré formula:

:P(A \cup B) = P(A) + P(B) - P(A \cap B)

Thus the equations can be combined to find a new representation of the intersection:

:P(A \cap B) = P(A) + P(B) - P(A \cup B) = P(A \mid B)P(B)

and of the union:

:P(A \cup B) = P(A) + P(B) - P(A \mid B)P(B)
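As a quick check of the multiplication rule and the combined identity above, a small Python sketch (the events on one fair die are illustrative choices, not from the original):

```python
from fractions import Fraction

omega = set(range(1, 7))   # one fair die, equally likely outcomes
A = {2, 4, 6}              # even
B = {1, 2, 3}              # at most 3

P = lambda E: Fraction(len(E), len(omega))
P_A_given_B = Fraction(len(A & B), len(B))   # = 1/3

# Multiplication rule: P(A intersect B) = P(A | B) P(B)
assert P(A & B) == P_A_given_B * P(B)

# Combined with the Poincare formula: P(A n B) = P(A) + P(B) - P(A u B)
assert P(A & B) == P(A) + P(B) - P(A | B)    # `A | B` is set union here
```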


As the probability of a conditional event

Conditional probability can be defined as the probability of a conditional event A_B. The Goodman–Nguyen–Van Fraassen conditional event can be defined as:

:A_B = \bigcup_{i \geq 1} \left( \bigcap_{j < i} \overline{B}_j, A_i B_i \right),

where A_i and B_i represent states or elements of ''A'' or ''B''. It can be shown that

:P(A_B) = \frac{P(A \cap B)}{P(B)},

which meets the Kolmogorov definition of conditional probability.


Conditioning on an event of probability zero

If P(B) = 0, then according to the definition, P(A \mid B) is undefined.

The case of greatest interest is that of a random variable ''Y'', conditioned on a continuous random variable ''X'' resulting in a particular outcome ''x''. The event B = \{X = x\} has probability zero and, as such, cannot be conditioned on.

Instead of conditioning on ''X'' being ''exactly'' ''x'', we could condition on it being closer than distance \epsilon away from ''x''. The event B = \{x - \epsilon < X < x + \epsilon\} will generally have nonzero probability and hence, can be conditioned on. We can then take the limit

:\lim_{\epsilon \to 0} P(A \mid x - \epsilon < X < x + \epsilon).

For example, if two continuous random variables ''X'' and ''Y'' have a joint density f_{X,Y}(x,y), then by L'Hôpital's rule and the Leibniz integral rule, upon differentiation with respect to \epsilon:

:\begin{align}
\lim_{\epsilon \to 0} P(Y \in U \mid x_0 - \epsilon < X < x_0 + \epsilon)
&= \lim_{\epsilon \to 0} \frac{\int_{x_0-\epsilon}^{x_0+\epsilon} \int_U f_{X,Y}(x,y)\,\mathrm{d}y\,\mathrm{d}x}{\int_{x_0-\epsilon}^{x_0+\epsilon} \int_{\mathbb{R}} f_{X,Y}(x,y)\,\mathrm{d}y\,\mathrm{d}x} \\
&= \frac{\int_U f_{X,Y}(x_0,y)\,\mathrm{d}y}{\int_{\mathbb{R}} f_{X,Y}(x_0,y)\,\mathrm{d}y}.
\end{align}

The resulting limit is the conditional probability distribution of ''Y'' given ''X'' and exists when the denominator, the probability density f_X(x_0), is strictly positive.

It is tempting to ''define'' the undefined probability P(A \mid X = x) using this limit, but this cannot be done in a consistent manner. In particular, it is possible to find random variables ''X'' and ''W'' and values ''x'', ''w'' such that the events \{X = x\} and \{W = w\} are identical but the resulting limits are not:

:\lim_{\epsilon \to 0} P(A \mid x - \epsilon \le X \le x + \epsilon) \neq \lim_{\epsilon \to 0} P(A \mid w - \epsilon \le W \le w + \epsilon).

The Borel–Kolmogorov paradox demonstrates this with a geometrical argument.
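The limiting construction can be illustrated numerically. The following Python sketch (a Monte Carlo illustration with an assumed model, not part of the original text) conditions on ever-narrower windows around x_0 = 1; for the assumed model Y = X + N(0, 1), the estimates approach P(Y > 0 | X = 1) = Φ(1) ≈ 0.841:

```python
import random

random.seed(0)
x0 = 1.0
n = 10**6

# Sample (X, Y) with X ~ N(0, 1) and Y = X + N(0, 1), so that
# Y | X = x0 is N(x0, 1) and P(Y > 0 | X = x0) = Phi(x0) ~ 0.841.
samples = [(x, x + random.gauss(0, 1))
           for x in (random.gauss(0, 1) for _ in range(n))]

# Condition on the window {x0 - eps < X < x0 + eps} and shrink eps.
for eps in (1.0, 0.3, 0.1, 0.03):
    window = [y for x, y in samples if abs(x - x0) < eps]
    print(eps, sum(y > 0 for y in window) / len(window))
```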


Conditioning on a discrete random variable

Let ''X'' be a discrete random variable and its possible outcomes denoted ''V''. For example, if ''X'' represents the value of a rolled die then ''V'' is the set \{1, 2, 3, 4, 5, 6\}. Let us assume for the sake of presentation that ''X'' is a discrete random variable, so that each value in ''V'' has a nonzero probability.

For a value ''x'' in ''V'' and an event ''A'', the conditional probability is given by P(A \mid X = x). Writing

:c(x, A) = P(A \mid X = x)

for short, we see that it is a function of two variables, ''x'' and ''A''.

For a fixed ''A'', we can form the random variable Y = c(X, A). It represents an outcome of P(A \mid X = x) whenever a value ''x'' of ''X'' is observed.

The conditional probability of ''A'' given ''X'' can thus be treated as a random variable ''Y'' with outcomes in the interval [0, 1]. From the law of total probability, its expected value is equal to the unconditional probability of ''A''.
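The identity E[P(A | X)] = P(A) can be verified by direct enumeration. A small Python sketch (the two-dice event is an illustrative choice, not from the original):

```python
from fractions import Fraction

# Two fair dice; X is the first die, A is the event "sum >= 10".
A = lambda d1, d2: d1 + d2 >= 10

def c(x, A_pred):
    """c(x, A) = P(A | X = x), counting over the second die."""
    return Fraction(sum(A_pred(x, d2) for d2 in range(1, 7)), 6)

# Law of total probability: E[c(X, A)] = P(A).
expected = sum(Fraction(1, 6) * c(x, A) for x in range(1, 7))
unconditional = Fraction(sum(A(d1, d2) for d1 in range(1, 7)
                             for d2 in range(1, 7)), 36)
assert expected == unconditional == Fraction(1, 6)
```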


Partial conditional probability

The partial conditional probability P(A \mid B_1 \equiv b_1, \ldots, B_m \equiv b_m) is about the probability of event A given that each of the condition events B_i has occurred to a degree b_i (degree of belief, degree of experience) that might be different from 100%. Frequentistically, partial conditional probability makes sense if the conditions are tested in experiment repetitions of appropriate length n. Such n-bounded partial conditional probability can be defined as the conditionally expected average occurrence of event A in testbeds of length n that adhere to all of the probability specifications B_i \equiv b_i, i.e.:

:P^n(A \mid B_1 \equiv b_1, \ldots, B_m \equiv b_m) = \operatorname{E}(\overline{A}^n \mid \overline{B}^n_1 = b_1, \ldots, \overline{B}^n_m = b_m)

Based on that, partial conditional probability can be defined as

:P(A \mid B_1 \equiv b_1, \ldots, B_m \equiv b_m) = \lim_{n \to \infty} P^n(A \mid B_1 \equiv b_1, \ldots, B_m \equiv b_m),

where b_i n \in \mathbb{N}.

Jeffrey conditionalization is a special case of partial conditional probability, in which the condition events must form a partition:

:P(A \mid B_1 \equiv b_1, \ldots, B_m \equiv b_m) = \sum^m_{i=1} b_i P(A \mid B_i)
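A minimal sketch of the Jeffrey conditionalization sum in Python, with hypothetical degrees of belief b_i and conditionals P(A | B_i) (all numbers invented for illustration):

```python
from fractions import Fraction

# The condition events B_1, B_2 partition the sample space and are
# believed to hold with degrees b_i summing to 1 (hypothetical values).
b = [Fraction(7, 10), Fraction(3, 10)]          # degrees for B_1, B_2
p_A_given_B = [Fraction(1, 4), Fraction(1, 2)]  # P(A | B_i), hypothetical

# Jeffrey conditionalization: P(A) = sum_i b_i * P(A | B_i)
p_A = sum(bi * pi for bi, pi in zip(b, p_A_given_B))
print(p_A)  # 7/40 + 3/20 = 13/40
```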


Example

Suppose that somebody secretly rolls two fair six-sided dice, and we wish to compute the probability that the face-up value of the first one is 2, given the information that their sum is no greater than 5.

* Let ''D''1 be the value rolled on die 1.
* Let ''D''2 be the value rolled on die 2.

''Probability that'' ''D''1 = 2

The sample space consists of the 36 combinations of rolled values of the two dice, each occurring with probability 1/36. The table below shows the sum ''D''1 + ''D''2 for each combination:

          D2=1  D2=2  D2=3  D2=4  D2=5  D2=6
 D1=1       2     3     4     5     6     7
 D1=2       3     4     5     6     7     8
 D1=3       4     5     6     7     8     9
 D1=4       5     6     7     8     9    10
 D1=5       6     7     8     9    10    11
 D1=6       7     8     9    10    11    12

''D''1 = 2 in exactly 6 of the 36 outcomes; thus ''P''(''D''1 = 2) = 6/36 = 1/6.

''Probability that'' ''D''1 + ''D''2 ≤ 5

''D''1 + ''D''2 ≤ 5 for exactly 10 of the 36 outcomes; thus ''P''(''D''1 + ''D''2 ≤ 5) = 10/36.

''Probability that'' ''D''1 = 2 ''given that'' ''D''1 + ''D''2 ≤ 5

For 3 of these 10 outcomes, ''D''1 = 2. Thus, the conditional probability P(''D''1 = 2 | ''D''1 + ''D''2 ≤ 5) = 3/10 = 0.3.

Here, in the earlier notation for the definition of conditional probability, the conditioning event ''B'' is that ''D''1 + ''D''2 ≤ 5, and the event ''A'' is ''D''1 = 2. We have P(A \mid B) = \tfrac{P(A \cap B)}{P(B)} = \tfrac{3/36}{10/36} = \tfrac{3}{10}, as seen in the table.
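The same answer can be checked by brute-force enumeration of the 36 outcomes; a short Python sketch (not part of the original article):

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))       # 36 equally likely rolls
B = [(d1, d2) for d1, d2 in omega if d1 + d2 <= 5]  # 10 outcomes
A_and_B = [(d1, d2) for d1, d2 in B if d1 == 2]     # 3 outcomes

# P(D1 = 2 | D1 + D2 <= 5) = |A intersect B| / |B|
print(Fraction(len(A_and_B), len(B)))  # 3/10
```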


Use in inference

In statistical inference, the conditional probability is an update of the probability of an event based on new information. The new information can be incorporated as follows:

* Let ''A'', the event of interest, be in the sample space, say (''X'', ''P'').
* The occurrence of the event ''A'' knowing that event ''B'' has or will have occurred, means the occurrence of ''A'' as it is restricted to ''B'', i.e. A \cap B.
* Without the knowledge of the occurrence of ''B'', the information about the occurrence of ''A'' would simply be ''P''(''A'').
* The probability of ''A'' knowing that event ''B'' has or will have occurred, will be the probability of A \cap B relative to ''P''(''B''), the probability that ''B'' has occurred.
* This results in P(A \mid B) = P(A \cap B)/P(B) whenever ''P''(''B'') > 0 and 0 otherwise.

This approach results in a probability measure that is consistent with the original probability measure and satisfies all the Kolmogorov axioms. This conditional probability measure also could have resulted by assuming that the relative magnitude of the probability of ''A'' with respect to ''X'' will be preserved with respect to ''B'' (cf. the Formal Derivation below).

The wording "evidence" or "information" is generally used in the Bayesian interpretation of probability. The conditioning event is interpreted as evidence for the conditioned event. That is, ''P''(''A'') is the probability of ''A'' before accounting for evidence ''E'', and ''P''(''A'' | ''E'') is the probability of ''A'' after having accounted for evidence ''E'' or after having updated ''P''(''A''). This is consistent with the frequentist interpretation, which is the first definition given above.


Example

When Morse code is transmitted, there is a certain probability that the "dot" or "dash" that was received is erroneous. This is often taken as interference in the transmission of a message. Therefore, it is important to consider, when sending a "dot", for example, the probability that a "dot" was received. This is represented by:

:P(dot\ sent \mid dot\ received) = P(dot\ received \mid dot\ sent) \frac{P(dot\ sent)}{P(dot\ received)}.

In Morse code, the ratio of dots to dashes is 3:4 at the point of sending, so the probabilities of a "dot" and "dash" are

:P(dot\ sent) = \frac{3}{7} \ and \ P(dash\ sent) = \frac{4}{7}.

If it is assumed that the probability that a dot is transmitted as a dash is 1/10, and that the probability that a dash is transmitted as a dot is likewise 1/10, then Bayes's rule can be used to calculate P(dot\ received):

:P(dot\ received) = P(dot\ received \cap dot\ sent) + P(dot\ received \cap dash\ sent)
:P(dot\ received) = P(dot\ received \mid dot\ sent)P(dot\ sent) + P(dot\ received \mid dash\ sent)P(dash\ sent)
:P(dot\ received) = \frac{9}{10}\times\frac{3}{7} + \frac{1}{10}\times\frac{4}{7} = \frac{31}{70}

Now, P(dot\ sent \mid dot\ received) can be calculated:

:P(dot\ sent \mid dot\ received) = P(dot\ received \mid dot\ sent) \frac{P(dot\ sent)}{P(dot\ received)} = \frac{9}{10}\times \frac{3/7}{31/70} = \frac{27}{31}
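The arithmetic above can be reproduced exactly with Python's `fractions` module (a verification sketch, not from the original):

```python
from fractions import Fraction

p_dot_sent = Fraction(3, 7)               # dots : dashes = 3 : 4
p_dash_sent = Fraction(4, 7)
p_dot_rcvd_given_dot = Fraction(9, 10)    # 1/10 chance a dot becomes a dash
p_dot_rcvd_given_dash = Fraction(1, 10)   # 1/10 chance a dash becomes a dot

# Law of total probability: overall chance of receiving a dot.
p_dot_rcvd = (p_dot_rcvd_given_dot * p_dot_sent
              + p_dot_rcvd_given_dash * p_dash_sent)
print(p_dot_rcvd)                                       # 31/70

# Bayes' rule: P(dot sent | dot received).
print(p_dot_rcvd_given_dot * p_dot_sent / p_dot_rcvd)   # 27/31
```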


Statistical independence

Events ''A'' and ''B'' are defined to be statistically independent if the probability of the intersection of ''A'' and ''B'' is equal to the product of the probabilities of ''A'' and ''B'':

:P(A \cap B) = P(A) P(B).

If ''P''(''B'') is not zero, then this is equivalent to the statement that

:P(A \mid B) = P(A).

Similarly, if ''P''(''A'') is not zero, then

:P(B \mid A) = P(B)

is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined, and the preferred definition is symmetrical in ''A'' and ''B''. Independence does not refer to a disjoint event.

It should also be noted that given the independent event pair (''A'', ''B'') and an event ''C'', the pair is defined to be conditionally independent if the product holds true:

:P(AB \mid C) = P(A \mid C)P(B \mid C)

This theorem can be useful in applications where multiple independent events are being observed.

Independent events vs. mutually exclusive events

The concepts of mutually independent events and mutually exclusive events are separate and distinct. The following table contrasts results for the two cases (provided that the probability of the conditioning event is not zero):

                If statistically independent   If mutually exclusive
 P(A | B) =     P(A)                           0
 P(B | A) =     P(B)                           0
 P(A ∩ B) =     P(A)P(B)                       0

In fact, mutually exclusive events cannot be statistically independent (unless both of them are impossible), since knowing that one occurs gives information about the other (in particular, that the latter will certainly not occur).
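A short Python sketch contrasting an independent pair with a mutually exclusive pair on one fair die (the event choices are illustrative assumptions):

```python
from fractions import Fraction

omega = set(range(1, 7))              # one fair die
P = lambda E: Fraction(len(E), len(omega))

A = {2, 4, 6}    # even
B = {1, 2}       # at most 2

# Independent: P(A n B) = 1/6 equals P(A) P(B) = 1/2 * 1/3.
assert P(A & B) == P(A) * P(B)

# Mutually exclusive (and hence not independent) pair:
C = {1, 3, 5}    # odd, disjoint from A
assert P(A & C) == 0 != P(A) * P(C)
```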


Common fallacies

:''These fallacies should not be confused with Robert K. Shope's 1978 "conditional fallacy", which deals with counterfactual examples that beg the question.''


Assuming conditional probability is of similar size to its inverse

In general, it cannot be assumed that ''P''(''A'' | ''B'') ≈ ''P''(''B'' | ''A''). This can be an insidious error, even for those who are highly conversant with statistics. The relationship between ''P''(''A'' | ''B'') and ''P''(''B'' | ''A'') is given by Bayes' theorem:

:\begin{align}
P(B \mid A) &= \frac{P(A \mid B) P(B)}{P(A)} \\
\Leftrightarrow \frac{P(B \mid A)}{P(A \mid B)} &= \frac{P(B)}{P(A)}
\end{align}

That is, P(''A'' | ''B'') ≈ P(''B'' | ''A'') only if ''P''(''B'')/''P''(''A'') ≈ 1, or equivalently, ''P''(''A'') ≈ ''P''(''B'').


Assuming marginal and conditional probabilities are of similar size

In general, it cannot be assumed that ''P''(''A'') ≈ ''P''(''A'' | ''B''). These probabilities are linked through the law of total probability:

:P(A) = \sum_n P(A \cap B_n) = \sum_n P(A \mid B_n)P(B_n),

where the events (B_n) form a countable partition of \Omega.

This fallacy may arise through selection bias. For example, in the context of a medical claim, let ''S'' be the event that a sequela (chronic disease) ''S'' occurs as a consequence of circumstance (acute condition) ''C''. Let ''H'' be the event that an individual seeks medical help. Suppose that in most cases, ''C'' does not cause ''S'' (so that ''P''(''S'') is low). Suppose also that medical attention is only sought if ''S'' has occurred due to ''C''. From experience of patients, a doctor may therefore erroneously conclude that ''P''(''S'') is high. The actual probability observed by the doctor is ''P''(''S'' | ''H'').
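A numeric illustration of this selection-bias effect, with invented numbers (hypothetically, 2% of ''C'' cases develop ''S'', and only ''S'' cases seek help):

```python
from fractions import Fraction

# Hypothetical numbers for illustration only.
p_S = Fraction(2, 100)          # P(S) is low
p_H_given_S = Fraction(1)       # everyone with S seeks help
p_H_given_not_S = Fraction(0)   # nobody without S seeks help

# Law of total probability for P(H):
p_H = p_H_given_S * p_S + p_H_given_not_S * (1 - p_S)

# What the doctor observes: P(S | H) = P(S n H) / P(H) = 1,
# even though the unconditional P(S) is only 0.02.
p_S_given_H = p_H_given_S * p_S / p_H
print(p_S, p_S_given_H)         # 1/50 1
```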


Over- or under-weighting priors

Not taking prior probability into account partially or completely is called ''base rate neglect''. The reverse, insufficient adjustment from the prior probability, is ''conservatism''.


Formal derivation

Formally, ''P''(''A'' | ''B'') is defined as the probability of ''A'' according to a new probability function on the sample space, such that outcomes not in ''B'' have probability 0 and that it is consistent with all original probability measures. (George Casella and Roger L. Berger (1990), ''Statistical Inference'', Duxbury Press, p. 18 ''et seq.''; Grinstead and Snell, ''Introduction to Probability'', p. 134.)

Let Ω be a discrete sample space with elementary events {ω}, and let ''P'' be the probability measure with respect to the σ-algebra of Ω. Suppose we are told that the event ''B'' ⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. All events that are not in ''B'' will have null probability in the new distribution. For events in ''B'', two conditions must be met: the probability of ''B'' is one, and the relative magnitudes of the probabilities must be preserved. The former is required by the axioms of probability, and the latter stems from the fact that the new probability measure has to be the analog of ''P'' in which the probability of ''B'' is one, so every event that is not in ''B'' has a null probability. Hence, for some scale factor ''α'', the new distribution must satisfy:

# \omega \in B : P(\omega \mid B) = \alpha P(\omega)
# \omega \notin B : P(\omega \mid B) = 0
# \sum_{\omega \in \Omega} P(\omega \mid B) = 1.

Substituting 1 and 2 into 3 to select ''α'':

:\begin{align}
1 &= \sum_{\omega \in \Omega} P(\omega \mid B) \\
&= \sum_{\omega \in B} P(\omega \mid B) + \cancelto{0}{\sum_{\omega \notin B} P(\omega \mid B)} \\
&= \alpha \sum_{\omega \in B} P(\omega) \\
&= \alpha \cdot P(B) \\
\Rightarrow \alpha &= \frac{1}{P(B)}
\end{align}

So the new probability distribution is

# \omega \in B : P(\omega \mid B) = \frac{P(\omega)}{P(B)}
# \omega \notin B : P(\omega \mid B) = 0

Now for a general event ''A'',

:\begin{align}
P(A \mid B) &= \sum_{\omega \in A \cap B} P(\omega \mid B) + \cancelto{0}{\sum_{\omega \in A \cap B^c} P(\omega \mid B)} \\
&= \sum_{\omega \in A \cap B} \frac{P(\omega)}{P(B)} \\
&= \frac{P(A \cap B)}{P(B)}
\end{align}
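The construction above translates directly into code. A minimal Python sketch (the measure ''P'' and the event ''B'' are illustrative assumptions):

```python
from fractions import Fraction

# A discrete probability measure P on Omega = {w1, ..., w4}.
P = {"w1": Fraction(1, 2), "w2": Fraction(1, 4),
     "w3": Fraction(1, 8), "w4": Fraction(1, 8)}

def condition(P, B):
    """Rescale by alpha = 1/P(B): outcomes outside B get probability 0;
    outcomes in B keep their relative magnitudes."""
    pB = sum(P[w] for w in B)
    if pB == 0:
        raise ValueError("cannot condition on an event of probability 0")
    return {w: (p / pB if w in B else Fraction(0)) for w, p in P.items()}

PB = condition(P, {"w2", "w3"})
print(PB)                      # w2: 2/3, w3: 1/3, others 0
assert sum(PB.values()) == 1   # the new measure is normalized
```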


See also

* Bayes' theorem
* Bayesian epistemology
* Borel–Kolmogorov paradox
* Chain rule (probability)
* Class membership probabilities
* Conditional independence
* Conditional probability distribution
* Conditioning (probability)
* Joint probability distribution
* Monty Hall problem
* Pairwise independent distribution
* Posterior probability
* Regular conditional probability


References


External links

* Visual explanation of conditional probability