In information theory, the conditional entropy quantifies the amount of information needed to describe the outcome of a random variable Y given that the value of another random variable X is known. Here, information is measured in shannons, nats, or hartleys. The ''entropy of Y conditioned on X'' is written as \Eta(Y\mid X).


Definition

The conditional entropy of Y given X is defined as

:\Eta(Y\mid X)\ = -\sum_{x\in\mathcal X,\, y\in\mathcal Y}p(x,y)\log \frac{p(x,y)}{p(x)}

where \mathcal X and \mathcal Y denote the support sets of X and Y.

''Note:'' Here, the convention is that the expression 0 \log 0 is treated as equal to zero, because \lim_{\theta\to 0^{+}} \theta \log \theta = 0.

Intuitively, by the definitions of expected value and of conditional probability, \Eta(Y\mid X) can be written as \Eta(Y\mid X) = \mathbb{E}[f(X,Y)], where f is defined as

:f(x,y) := -\log\left(\frac{p(x,y)}{p(x)}\right) = -\log(p(y\mid x)).

One can think of f as associating each pair (x, y) with a quantity measuring the information content of (Y=y) given (X=x), i.e. the amount of information needed to describe the event (Y=y) given (X=x). Hence, by computing the expected value of f over all pairs (x, y) \in \mathcal X \times \mathcal Y, the conditional entropy \Eta(Y\mid X) measures, on average, how much additional information is needed to describe Y once X is known.
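As a concrete illustration (not part of the original article), the following Python sketch evaluates this sum for a small joint distribution; the function name conditional_entropy_bits and the toy pmf are illustrative choices, and the 0 \log 0 = 0 convention is applied by skipping zero-probability entries.

<syntaxhighlight lang="python">
import numpy as np

def conditional_entropy_bits(p_xy):
    """H(Y|X) in bits, where p_xy[i, j] = P(X = x_i, Y = y_j)."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)          # marginal p(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_xy * np.log2(p_xy / p_x)         # p(x,y) * log2 p(y|x)
    # 0 log 0 convention: entries with p(x, y) = 0 contribute nothing.
    return -np.sum(np.where(p_xy > 0, terms, 0.0))

# A toy joint pmf: X is a fair bit, and Y agrees with X three times out of four.
p = np.array([[3/8, 1/8],
              [1/8, 3/8]])
print(conditional_entropy_bits(p))   # about 0.811 bits
</syntaxhighlight>

In this toy example Y is X passed through a channel that flips it with probability 1/4, so \Eta(Y\mid X) equals the binary entropy of 1/4, roughly 0.811 bits.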


Motivation

Let \Eta(Y\mid X=x) be the entropy of the discrete random variable Y conditioned on the discrete random variable X taking a certain value x. Denote the support sets of X and Y by \mathcal X and \mathcal Y. Let Y have probability mass function p_Y. The unconditional entropy of Y is calculated as \Eta(Y) := \mathbb{E}[\operatorname{I}(Y)], i.e.

:\Eta(Y) = \sum_{y\in\mathcal Y} \Pr(Y=y)\,\operatorname{I}(y) = -\sum_{y\in\mathcal Y} p_Y(y) \log_2 p_Y(y),

where \operatorname{I}(y_i) is the information content of the outcome of Y taking the value y_i. The entropy of Y conditioned on X taking the value x is defined analogously:

:\Eta(Y\mid X=x) = -\sum_{y\in\mathcal Y} \Pr(Y=y\mid X=x)\,\log_2 \Pr(Y=y\mid X=x).

Note that \Eta(Y\mid X) is the result of averaging \Eta(Y\mid X=x) over all possible values x that X may take. Also, if the above sum is taken over a sample y_1, \dots, y_n, the expected value \mathbb{E}_X[\Eta(y_1, \dots, y_n \mid X = x)] is known in some domains as ''equivocation''.

Given discrete random variables X with image \mathcal X and Y with image \mathcal Y, the conditional entropy of Y given X is defined as the weighted sum of \Eta(Y\mid X=x) for each possible value of x, using p(x) as the weights:

:\begin{align}
\Eta(Y\mid X)\ &\equiv \sum_{x\in\mathcal X}\,p(x)\,\Eta(Y\mid X=x)\\
&=-\sum_{x\in\mathcal X} p(x)\sum_{y\in\mathcal Y}\,p(y\mid x)\,\log_2\, p(y\mid x)\\
&=-\sum_{x\in\mathcal X, y\in\mathcal Y}\,p(x)\,p(y\mid x)\,\log_2\,p(y\mid x)\\
&=-\sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\log_2 \frac{p(x,y)}{p(x)}.
\end{align}
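The equality between the weighted-sum form and the joint-distribution form can be checked numerically. The sketch below is illustrative only (the entropy_bits helper and the toy pmf are assumptions for the example, reusing the distribution from the previous sketch):

<syntaxhighlight lang="python">
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of a probability vector, with 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy joint pmf: rows index x, columns index y.
p_xy = np.array([[3/8, 1/8],
                 [1/8, 3/8]])
p_x = p_xy.sum(axis=1)

# Weighted-sum form: average H(Y | X = x) over x with weights p(x).
h_given_each_x = [entropy_bits(p_xy[i] / p_x[i]) for i in range(len(p_x))]
h_weighted = np.dot(p_x, h_given_each_x)

# Joint form: -sum_{x,y} p(x,y) log2( p(x,y) / p(x) ).
with np.errstate(divide="ignore", invalid="ignore"):
    terms = p_xy * np.log2(p_xy / p_x[:, None])
h_joint = -np.sum(np.where(p_xy > 0, terms, 0.0))

print(h_weighted, h_joint)   # both about 0.811 bits
</syntaxhighlight>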


Properties


Conditional entropy equals zero

:\Eta(Y\mid X)=0 if and only if the value of Y is completely determined by the value of X.


Conditional entropy of independent random variables

Conversely, \Eta(Y\mid X) = \Eta(Y) if and only if Y and X are independent random variables.
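Both extreme cases are easy to verify numerically. In the sketch below (illustrative helper names and toy distributions, not taken from the article), a deterministic relationship Y = X gives zero conditional entropy, while an independent pair gives \Eta(Y\mid X) = \Eta(Y):

<syntaxhighlight lang="python">
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cond_entropy_bits(p_xy):
    """H(Y|X) in bits from a joint pmf p_xy[x, y], with 0 log 0 = 0."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_xy * np.log2(p_xy / p_x)
    return -np.sum(np.where(p_xy > 0, terms, 0.0))

# Y completely determined by X (here Y = X): conditional entropy is zero.
determined = np.array([[0.5, 0.0],
                       [0.0, 0.5]])
print(cond_entropy_bits(determined))                 # 0.0

# X and Y independent fair bits: H(Y|X) equals H(Y) = 1 bit.
independent = np.full((2, 2), 0.25)
print(cond_entropy_bits(independent),
      entropy_bits(independent.sum(axis=0)))         # 1.0 1.0
</syntaxhighlight>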


Chain rule

Assume that the combined system determined by two random variables X and Y has joint entropy \Eta(X,Y), that is, we need \Eta(X,Y) bits of information on average to describe its exact state. Now if we first learn the value of X, we have gained \Eta(X) bits of information. Once X is known, we only need \Eta(X,Y)-\Eta(X) bits to describe the state of the whole system. This quantity is exactly \Eta(Y\mid X), which gives the ''chain rule'' of conditional entropy:

:\Eta(Y\mid X)\, = \, \Eta(X,Y)- \Eta(X).

The chain rule follows from the above definition of conditional entropy:

:\begin{align}
\Eta(Y\mid X) &= \sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\log \left(\frac{p(x)}{p(x,y)} \right) \\
&= \sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\bigl(\log (p(x)) - \log (p(x,y))\bigr) \\
&= -\sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\log (p(x,y)) + \sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\log(p(x)) \\
&= \Eta(X,Y) + \sum_{x \in \mathcal X} p(x)\log (p(x)) \\
&= \Eta(X,Y) - \Eta(X).
\end{align}

In general, a chain rule for multiple random variables holds:

:\Eta(X_1,X_2,\ldots,X_n) = \sum_{i=1}^n \Eta(X_i \mid X_1, \ldots, X_{i-1}).

It has a similar form to the chain rule in probability theory, except that addition instead of multiplication is used.
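The chain rule can be checked numerically on any finite joint pmf. The following sketch is illustrative (an arbitrary toy distribution); it computes \Eta(Y\mid X) directly from the definition and compares it with \Eta(X,Y)-\Eta(X):

<syntaxhighlight lang="python">
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of any array of probabilities (0 log 0 = 0)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A small asymmetric joint pmf p(x, y); rows index x, columns index y.
p_xy = np.array([[0.30, 0.10, 0.10],
                 [0.05, 0.25, 0.20]])
p_x = p_xy.sum(axis=1)

h_xy = entropy_bits(p_xy)   # joint entropy H(X,Y)
h_x = entropy_bits(p_x)     # marginal entropy H(X)

# Conditional entropy computed directly from its definition.
with np.errstate(divide="ignore", invalid="ignore"):
    terms = p_xy * np.log2(p_xy / p_x[:, None])
h_y_given_x = -np.sum(np.where(p_xy > 0, terms, 0.0))

print(h_y_given_x, h_xy - h_x)   # the two values agree (chain rule)
</syntaxhighlight>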


Bayes' rule

Bayes' rule for conditional entropy states

:\Eta(Y\mid X) \,=\, \Eta(X\mid Y) - \Eta(X) + \Eta(Y).

''Proof.'' \Eta(Y\mid X) = \Eta(X,Y) - \Eta(X) and \Eta(X\mid Y) = \Eta(Y,X) - \Eta(Y). Symmetry entails \Eta(X,Y) = \Eta(Y,X). Subtracting the two equations implies Bayes' rule.

If Y is conditionally independent of Z given X we have:

:\Eta(Y\mid X,Z) \,=\, \Eta(Y\mid X).
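A quick numerical check of Bayes' rule for entropy (again with illustrative helper names and a toy joint pmf) computes both sides on the same distribution:

<syntaxhighlight lang="python">
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cond_entropy_bits(p_xy):
    """H(Y|X) in bits for a joint pmf p_xy[x, y], with 0 log 0 = 0."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_xy * np.log2(p_xy / p_x)
    return -np.sum(np.where(p_xy > 0, terms, 0.0))

p_xy = np.array([[0.30, 0.10, 0.10],
                 [0.05, 0.25, 0.20]])      # rows: x, columns: y

h_y_given_x = cond_entropy_bits(p_xy)      # H(Y|X)
h_x_given_y = cond_entropy_bits(p_xy.T)    # H(X|Y): transposing swaps the roles
h_x = entropy_bits(p_xy.sum(axis=1))
h_y = entropy_bits(p_xy.sum(axis=0))

print(h_y_given_x, h_x_given_y - h_x + h_y)   # equal, as Bayes' rule states
</syntaxhighlight>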


Other properties

For any X and Y:

:\begin{align}
\Eta(Y\mid X) &\le \Eta(Y) \\
\Eta(X,Y) &= \Eta(X\mid Y) + \Eta(Y\mid X) + \operatorname{I}(X;Y) \\
\Eta(X,Y) &= \Eta(X) + \Eta(Y) - \operatorname{I}(X;Y) \\
\operatorname{I}(X;Y) &\le \Eta(X)
\end{align}

where \operatorname{I}(X;Y) is the mutual information between X and Y.

For independent X and Y:

:\Eta(Y\mid X) = \Eta(Y) and \Eta(X\mid Y) = \Eta(X)

Although the specific conditional entropy \Eta(X\mid Y=y) can be either less or greater than \Eta(X) for a given random variate y of Y, \Eta(X\mid Y) can never exceed \Eta(X).
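The last remark can be seen on a concrete example. In the sketch below (an illustrative joint pmf chosen for this purpose), the specific conditional entropy \Eta(X\mid Y=y) exceeds \Eta(X) for one value of y, yet the average \Eta(X\mid Y) does not:

<syntaxhighlight lang="python">
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint pmf p(x, y); rows index x, columns index y.
p_xy = np.array([[0.45, 0.25],
                 [0.05, 0.25]])
p_x = p_xy.sum(axis=1)   # marginal of X: (0.7, 0.3)
p_y = p_xy.sum(axis=0)   # marginal of Y: (0.5, 0.5)

h_x = entropy_bits(p_x)                                           # ~0.881 bits
h_x_given_each_y = [entropy_bits(p_xy[:, j] / p_y[j]) for j in range(2)]
h_x_given_y = np.dot(p_y, h_x_given_each_y)                       # average over y

print(h_x_given_each_y)      # [~0.469, 1.0]; the second exceeds H(X)
print(h_x_given_y, h_x)      # ~0.734 <= ~0.881; the average does not
</syntaxhighlight>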


Conditional differential entropy


Definition

The above definition is for discrete random variables. The continuous version of discrete conditional entropy is called ''conditional differential (or continuous) entropy''. Let X and Y be continuous random variables with a joint probability density function f(x,y). The differential conditional entropy h(X\mid Y) is defined as

:h(X\mid Y) = -\int_{\mathcal X, \mathcal Y} f(x,y)\log f(x\mid y)\,dx\,dy.
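For jointly Gaussian variables the conditional density is itself Gaussian, so h(X\mid Y) has a closed form and can also be estimated by Monte Carlo as -\mathbb{E}[\log f(X\mid Y)]. The sketch below is illustrative only; it assumes unit variances and correlation \rho, for which h(X\mid Y)=\tfrac{1}{2}\ln\bigl(2\pi e(1-\rho^2)\bigr) (a standard result for this special case, not stated above):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Jointly Gaussian (X, Y) with unit variances and correlation rho.
rho = 0.8
cov = np.array([[1.0, rho],
                [rho, 1.0]])
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=200_000)
x, y = samples[:, 0], samples[:, 1]

# Monte Carlo estimate of h(X|Y) = -E[ log f(X|Y) ] in nats.
# For this model, X | Y = y is Gaussian with mean rho*y and variance 1 - rho**2.
cond_var = 1.0 - rho**2
log_f_cond = -0.5 * np.log(2 * np.pi * cond_var) - (x - rho * y) ** 2 / (2 * cond_var)
h_mc = -np.mean(log_f_cond)

# Closed form for this case: h(X|Y) = 0.5 * ln(2*pi*e*(1 - rho^2)) nats.
h_exact = 0.5 * np.log(2 * np.pi * np.e * cond_var)

print(h_mc, h_exact)   # both about 0.91 nats
</syntaxhighlight>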


Properties

In contrast to the conditional entropy for discrete random variables, the conditional differential entropy may be negative. As in the discrete case there is a chain rule for differential entropy:

:h(Y\mid X)\,=\,h(X,Y)-h(X)

Notice however that this rule may not be true if the involved differential entropies do not exist or are infinite.

Joint differential entropy is also used in the definition of the mutual information between continuous random variables:

:\operatorname{I}(X,Y)=h(X)-h(X\mid Y)=h(Y)-h(Y\mid X)

Moreover, h(X\mid Y) \le h(X), with equality if and only if X and Y are independent.
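For the same bivariate Gaussian setup as above, the mutual information obtained from differential entropies reduces to the familiar closed form -\tfrac{1}{2}\ln(1-\rho^2); the short check below is illustrative only:

<syntaxhighlight lang="python">
import numpy as np

# Bivariate Gaussian with unit variances and correlation rho (same assumed setup as above).
rho = 0.8

h_x = 0.5 * np.log(2 * np.pi * np.e)                          # h(X) in nats
h_x_given_y = 0.5 * np.log(2 * np.pi * np.e * (1 - rho**2))   # h(X|Y) in nats

mi_from_entropies = h_x - h_x_given_y
mi_closed_form = -0.5 * np.log(1 - rho**2)

print(mi_from_entropies, mi_closed_form)   # both about 0.511 nats
# Also h(X|Y) <= h(X), with the gap equal to the mutual information.
</syntaxhighlight>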


Relation to estimator error

The conditional differential entropy yields a lower bound on the expected squared error of an estimator. For any Gaussian random variable X, observation Y and estimator \widehat{X} the following holds:

:\mathbb{E}\left[\bigl(X - \widehat{X}\bigr)^2\right] \ge \frac{1}{2\pi e}e^{2h(X\mid Y)}

This is related to the uncertainty principle from quantum mechanics.
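In the jointly Gaussian case the bound is tight: the minimum-mean-square-error estimator \widehat{X}=\mathbb{E}[X\mid Y] attains it with equality. The simulation below is an illustrative sketch; the estimator form \widehat{X}=\rho Y and the expression for h(X\mid Y) assume unit variances and correlation \rho:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Jointly Gaussian pair: X standard normal, Y a noisy observation with Corr(X, Y) = rho.
rho = 0.8
n = 200_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# MMSE estimator for this model: E[X | Y] = rho * Y.
x_hat = rho * y
mse = np.mean((x - x_hat) ** 2)

# Lower bound (1 / (2*pi*e)) * exp(2 h(X|Y)), with h(X|Y) = 0.5*ln(2*pi*e*(1 - rho^2)).
h_x_given_y = 0.5 * np.log(2 * np.pi * np.e * (1 - rho**2))
bound = np.exp(2 * h_x_given_y) / (2 * np.pi * np.e)

print(mse, bound)   # both about 1 - rho**2 = 0.36; the Gaussian case meets the bound
</syntaxhighlight>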


Generalization to quantum theory

In quantum information theory, the conditional entropy is generalized to the conditional quantum entropy. The latter can take negative values, unlike its classical counterpart.


See also

* Entropy (information theory)
* Mutual information
* Conditional quantum entropy
* Variation of information
* Entropy power inequality
* Likelihood function

