In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables. More specifically, it quantifies the "amount of information" (in units such as shannons (bits), nats or hartleys) obtained about one random variable by observing the other random variable. The concept of mutual information is intimately linked to that of entropy of a random variable, a fundamental notion in information theory that quantifies the expected "amount of information" held in a random variable. Not limited to real-valued random variables and linear dependence like the correlation coefficient, MI is more general and determines how different the joint distribution of the pair (X,Y) is from the product of the marginal distributions of X and Y. MI is the expected value of the pointwise mutual information (PMI). The quantity was defined and analyzed by Claude Shannon in his landmark paper "A Mathematical Theory of Communication", although he did not call it "mutual information". This term was coined later by Robert Fano. Mutual information is also known as information gain.


Definition

Let (X,Y) be a pair of random variables with values over the space \mathcal{X}\times\mathcal{Y}. If their joint distribution is P_{(X,Y)} and the marginal distributions are P_X and P_Y, the mutual information is defined as
:I(X;Y) = D_\text{KL}\left( P_{(X,Y)} \parallel P_X \otimes P_Y \right)
where D_\text{KL} is the Kullback–Leibler divergence, and P_X \otimes P_Y is the outer product distribution which assigns probability P_X(x)\cdot P_Y(y) to each (x,y).

Expressed in terms of the entropy \Eta(\cdot) and the conditional entropy \Eta(\cdot\mid\cdot) of the random variables X and Y, one also has (see relation to conditional and joint entropy below):
:I(X;Y) = \Eta(X) - \Eta(X\mid Y) = \Eta(Y) - \Eta(Y\mid X)
Notice, as per the properties of the Kullback–Leibler divergence, that I(X;Y) is equal to zero precisely when the joint distribution coincides with the product of the marginals, i.e. when X and Y are independent (and hence observing Y tells you nothing about X). In general I(X;Y) is non-negative; it is a measure of the price for encoding (X,Y) as a pair of independent random variables when in reality they are not.

If the natural logarithm is used, the unit of mutual information is the nat. If the log base 2 is used, the unit of mutual information is the shannon, also known as the bit. If the log base 10 is used, the unit of mutual information is the hartley, also known as the ban or the dit.


In terms of PMFs for discrete distributions

The mutual information of two jointly discrete random variables X and Y is calculated as a double sum:
: \operatorname{I}(X; Y) = \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} P_{(X,Y)}(x, y) \log \frac{P_{(X,Y)}(x, y)}{P_X(x)\, P_Y(y)},
where P_{(X,Y)} is the joint probability ''mass'' function of X and Y, and P_X and P_Y are the marginal probability mass functions of X and Y respectively.
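As a concrete illustration, the double sum can be evaluated directly in a few lines of Python; the joint table below is a made-up toy example (not from any dataset), and the change of base converts nats to shannons.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical joint probability mass function P_(X,Y) as a table
# (rows index x, columns index y); entries must sum to 1.
p_xy = np.array([[0.10, 0.15, 0.05],
                 [0.05, 0.30, 0.35]])

p_x = p_xy.sum(axis=1, keepdims=True)   # marginal P_X(x)
p_y = p_xy.sum(axis=0, keepdims=True)   # marginal P_Y(y)

# I(X;Y) = sum_{x,y} P(x,y) * log( P(x,y) / (P_X(x) P_Y(y)) ), skipping zero cells.
mask = p_xy > 0
mi_nats = np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x * p_y)[mask]))
mi_bits = mi_nats / np.log(2)           # change of base: shannons (bits)
print(f"I(X;Y) = {mi_nats:.4f} nat = {mi_bits:.4f} bit")
</syntaxhighlight>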


In terms of PDFs for continuous distributions

In the case of jointly continuous random variables, the double sum is replaced by a double integral:
: \operatorname{I}(X;Y) = \int_{\mathcal{Y}} \int_{\mathcal{X}} P_{(X,Y)}(x, y) \log \frac{P_{(X,Y)}(x, y)}{P_X(x)\, P_Y(y)} \; dx \,dy,
where P_{(X,Y)} is now the joint probability ''density'' function of X and Y, and P_X and P_Y are the marginal probability density functions of X and Y respectively.
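For continuous variables the integral is rarely evaluated in closed form; when the densities are known, one simple work-around is a Monte Carlo estimate of the expectation of \log\frac{p(x,y)}{p(x)\,p(y)}. The sketch below assumes a bivariate Gaussian with an illustrative correlation \rho = 0.6, whose exact mutual information (derived in the section on linear correlation below) is -\tfrac{1}{2}\log(1-\rho^2).

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal, norm

rho = 0.6                                   # assumed correlation for this toy example
cov = np.array([[1.0, rho], [rho, 1.0]])
joint = multivariate_normal(mean=[0.0, 0.0], cov=cov)

rng = np.random.default_rng(0)
samples = joint.rvs(size=200_000, random_state=rng)
x, y = samples[:, 0], samples[:, 1]

# Monte Carlo estimate of E[ log p(x,y) / (p_X(x) p_Y(y)) ] under the joint density.
log_ratio = joint.logpdf(samples) - norm.logpdf(x) - norm.logpdf(y)
mi_mc = log_ratio.mean()
mi_exact = -0.5 * np.log(1 - rho**2)
print(f"Monte Carlo: {mi_mc:.4f} nat   exact: {mi_exact:.4f} nat")
</syntaxhighlight>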


Motivation

Intuitively, mutual information measures the information that X and Y share: it measures how much knowing one of these variables reduces uncertainty about the other. For example, if X and Y are independent, then knowing X does not give any information about Y and vice versa, so their mutual information is zero. At the other extreme, if X is a deterministic function of Y and Y is a deterministic function of X then all information conveyed by X is shared with Y: knowing X determines the value of Y and vice versa. As a result, the mutual information is the same as the uncertainty contained in Y (or X) alone, namely the entropy of Y (or X). A very special case of this is when X and Y are the same random variable.

Mutual information is a measure of the inherent dependence expressed in the joint distribution of X and Y relative to the marginal distributions of X and Y under the assumption of independence. Mutual information therefore measures dependence in the following sense: \operatorname{I}(X;Y) = 0 if and only if X and Y are independent random variables. This is easy to see in one direction: if X and Y are independent, then p_{(X,Y)}(x,y) = p_X(x) \cdot p_Y(y), and therefore:
: \log \frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} = \log 1 = 0.
Moreover, mutual information is nonnegative (i.e. \operatorname{I}(X;Y) \ge 0; see below) and symmetric (i.e. \operatorname{I}(X;Y) = \operatorname{I}(Y;X); see below).


Properties


Nonnegativity

Using Jensen's inequality on the definition of mutual information we can show that \operatorname{I}(X;Y) is non-negative, i.e.
:\operatorname{I}(X;Y) \ge 0
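A brief sketch of the Jensen step for the discrete case (with the sum restricted to pairs for which p_{(X,Y)}(x,y) > 0):
:\begin{align}
-\operatorname{I}(X;Y) &= \sum_{x,y} p_{(X,Y)}(x,y) \log \frac{p_X(x)\,p_Y(y)}{p_{(X,Y)}(x,y)} \\
&\le \log \sum_{x,y} p_{(X,Y)}(x,y)\, \frac{p_X(x)\,p_Y(y)}{p_{(X,Y)}(x,y)} \\
&= \log \sum_{x,y} p_X(x)\,p_Y(y) \;\le\; \log 1 = 0,
\end{align}
with equality throughout exactly when p_{(X,Y)} = p_X\, p_Y, i.e. when X and Y are independent.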


Symmetry

:\operatorname{I}(X;Y) = \operatorname{I}(Y;X)
The proof follows from the relationship with entropy given below.


Supermodularity under independence

If C is independent of (A,B), then
:\operatorname{I}(Y;A,B,C) - \operatorname{I}(Y;A,B) \ge \operatorname{I}(Y;A,C) - \operatorname{I}(Y;A).


Relation to conditional and joint entropy

Mutual information can be equivalently expressed as:
:\begin{align}
\operatorname{I}(X;Y) & \equiv \Eta(X) - \Eta(X\mid Y) \\
& \equiv \Eta(Y) - \Eta(Y\mid X) \\
& \equiv \Eta(X) + \Eta(Y) - \Eta(X, Y) \\
& \equiv \Eta(X, Y) - \Eta(X\mid Y) - \Eta(Y\mid X)
\end{align}
where \Eta(X) and \Eta(Y) are the marginal entropies, \Eta(X\mid Y) and \Eta(Y\mid X) are the conditional entropies, and \Eta(X,Y) is the joint entropy of X and Y. Notice the analogy to the union, difference, and intersection of two sets: in this respect, all the formulas given above are apparent from a Venn diagram of the two entropies. In terms of a communication channel in which the output Y is a noisy version of the input X, the same relations describe how much of the input survives in the output. Because \operatorname{I}(X;Y) is non-negative, it follows that \Eta(X) \ge \Eta(X\mid Y). Here we give the detailed deduction of \operatorname{I}(X;Y)=\Eta(Y)-\Eta(Y\mid X) for the case of jointly discrete random variables:
:\begin{align}
\operatorname{I}(X;Y) & = \sum_{x,y} p_{(X,Y)}(x,y) \log \frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} \\
& = \sum_{x,y} p_{(X,Y)}(x,y) \log \frac{p_{(X,Y)}(x,y)}{p_X(x)} - \sum_{x,y} p_{(X,Y)}(x,y) \log p_Y(y) \\
& = \sum_{x,y} p_X(x)\,p_{Y\mid X=x}(y) \log p_{Y\mid X=x}(y) - \sum_{x,y} p_{(X,Y)}(x,y) \log p_Y(y) \\
& = \sum_x p_X(x) \left(\sum_y p_{Y\mid X=x}(y) \log p_{Y\mid X=x}(y)\right) - \sum_y \left(\sum_x p_{(X,Y)}(x,y)\right) \log p_Y(y) \\
& = -\sum_x p_X(x)\, \Eta(Y\mid X=x) - \sum_y p_Y(y) \log p_Y(y) \\
& = -\Eta(Y\mid X) + \Eta(Y) \\
& = \Eta(Y) - \Eta(Y\mid X).
\end{align}
The proofs of the other identities above are similar. The proof of the general case (not just discrete) is similar, with integrals replacing sums.

Intuitively, if entropy \Eta(Y) is regarded as a measure of uncertainty about a random variable, then \Eta(Y\mid X) is a measure of what X does ''not'' say about Y. This is "the amount of uncertainty remaining about Y after X is known", and thus the right side of the second of these equalities can be read as "the amount of uncertainty in Y, minus the amount of uncertainty in Y which remains after X is known", which is equivalent to "the amount of uncertainty in Y which is removed by knowing X". This corroborates the intuitive meaning of mutual information as the amount of information (that is, reduction in uncertainty) that knowing either variable provides about the other.

Note that in the discrete case \Eta(Y\mid Y) = 0 and therefore \Eta(Y) = \operatorname{I}(Y;Y). Thus \operatorname{I}(Y; Y) \ge \operatorname{I}(X; Y), and one can formulate the basic principle that a variable contains at least as much information about itself as any other variable can provide.
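These identities are easy to verify numerically on a small joint table; the following sketch (illustrative numbers only) computes each side of the identities from the same PMF, using the chain rule \Eta(Y\mid X) = \Eta(X,Y) - \Eta(X).

<syntaxhighlight lang="python">
import numpy as np

p_xy = np.array([[0.10, 0.15, 0.05],      # hypothetical joint PMF, rows = x, cols = y
                 [0.05, 0.30, 0.35]])
p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

def entropy(p):
    """Shannon entropy in nats, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

H_x, H_y, H_xy = entropy(p_x), entropy(p_y), entropy(p_xy.ravel())
H_y_given_x = H_xy - H_x                  # chain rule: H(Y|X) = H(X,Y) - H(X)
H_x_given_y = H_xy - H_y

I = H_x + H_y - H_xy
print(np.isclose(I, H_x - H_x_given_y))   # I(X;Y) = H(X) - H(X|Y)
print(np.isclose(I, H_y - H_y_given_x))   # I(X;Y) = H(Y) - H(Y|X)
print(np.isclose(I, H_xy - H_x_given_y - H_y_given_x))
</syntaxhighlight>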


Relation to Kullback–Leibler divergence

For jointly discrete or jointly continuous pairs (X,Y), mutual information is the Kullback–Leibler divergence of the joint distribution p_{(X,Y)} from the product of the marginal distributions p_X \cdot p_Y, that is,
:\operatorname{I}(X; Y) = D_\text{KL}\left(p_{(X,Y)} \parallel p_X p_Y\right)
Furthermore, let p_{(X,Y)}(x,y) = p_{X\mid Y=y}(x) \cdot p_Y(y), where p_{X\mid Y=y} is the conditional mass or density function. Then we have the identity
:\operatorname{I}(X; Y) = \mathbb{E}_Y\left[ D_\text{KL}\!\left(p_{X\mid Y} \parallel p_X\right)\right]
The proof for jointly discrete random variables is as follows:
:\begin{align}
\operatorname{I}(X; Y) &= \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} p_{(X,Y)}(x,y) \log \frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} \\
&= \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} p_{X\mid Y=y}(x)\, p_Y(y) \log \frac{p_{X\mid Y=y}(x)}{p_X(x)} \\
&= \sum_{y \in \mathcal{Y}} p_Y(y) \sum_{x \in \mathcal{X}} p_{X\mid Y=y}(x) \log \frac{p_{X\mid Y=y}(x)}{p_X(x)} \\
&= \sum_{y \in \mathcal{Y}} p_Y(y) \; D_\text{KL}\!\left(p_{X\mid Y=y} \parallel p_X\right) \\
&= \mathbb{E}_Y \left[ D_\text{KL}\!\left(p_{X\mid Y} \parallel p_X\right)\right].
\end{align}
Similarly this identity can be established for jointly continuous random variables. Note that here the Kullback–Leibler divergence involves integration over the values of the random variable X only, and the expression D_\text{KL}(p_{X\mid Y} \parallel p_X) still denotes a random variable because Y is random. Thus mutual information can also be understood as the expectation over Y of the Kullback–Leibler divergence of the conditional distribution p_{X\mid Y} of X given Y from the univariate distribution p_X of X: the more different the distributions p_{X\mid Y} and p_X are on average, the greater the information gain.


Bayesian estimation of mutual information

If samples from a joint distribution are available, a Bayesian approach can be used to estimate the mutual information of that distribution. The first work to do this also showed how to do Bayesian estimation of many other information-theoretic properties besides mutual information, and subsequent researchers have rederived and extended the analysis, including a more recent approach based on a prior specifically tailored to the estimation of mutual information per se. In addition, an estimation method that accommodates continuous and multivariate outputs Y has recently been proposed.


Independence assumptions

The Kullback–Leibler divergence formulation of the mutual information is predicated on the assumption that one is interested in comparing p(x,y) to the fully factorized outer product p(x) \cdot p(y). In many problems, such as non-negative matrix factorization, one is interested in less extreme factorizations; specifically, one wishes to compare p(x,y) to a low-rank matrix approximation in some unknown variable w; that is, to what degree one might have
: p(x,y)\approx \sum_w p^\prime(x,w)\, p^{\prime\prime}(w,y)
Alternatively, one might be interested in knowing how much more information p(x,y) carries over its factorization. In such a case, the excess information that the full distribution p(x,y) carries over the matrix factorization is given by the Kullback–Leibler divergence
:\sum_{y} \sum_{x} p(x,y) \log \frac{p(x,y)}{\sum_w p^\prime(x,w)\, p^{\prime\prime}(w,y)}.
The conventional definition of the mutual information is recovered in the extreme case that the process W has only one value for w.
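A sketch of how the excess information over a given factorization might be computed with plain NumPy; the factor matrices here are placeholders chosen for illustration (any non-negative factorization, for instance one produced by an NMF routine, could be substituted):

<syntaxhighlight lang="python">
import numpy as np

def excess_information(p_xy, p1, p2):
    """KL divergence D( p(x,y) || sum_w p1(x,w) p2(w,y) ) in nats.
    p_xy: joint table (n_x, n_y); p1: (n_x, n_w); p2: (n_w, n_y)."""
    q_xy = p1 @ p2                              # low-rank reconstruction of the joint
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log(p_xy[mask] / q_xy[mask]))

p_xy = np.array([[0.10, 0.15, 0.05],            # hypothetical joint PMF
                 [0.05, 0.30, 0.35]])
p_x = p_xy.sum(axis=1, keepdims=True)
p_y = p_xy.sum(axis=0, keepdims=True)

# A rank-1 factorization through a single value of w recovers ordinary mutual information.
print(excess_information(p_xy, p_x, p_y))
# A factorization that reproduces the joint exactly leaves no excess information.
print(excess_information(p_xy, p_xy, np.eye(3)))
</syntaxhighlight>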


Variations

Several variations on mutual information have been proposed to suit various needs. Among these are normalized variants and generalizations to more than two variables.


Metric

Many applications require a metric, that is, a distance measure between pairs of points. The quantity
:\begin{align}
d(X,Y) &= \Eta(X,Y) - \operatorname{I}(X;Y) \\
&= \Eta(X) + \Eta(Y) - 2\operatorname{I}(X;Y) \\
&= \Eta(X\mid Y) + \Eta(Y\mid X) \\
&= 2\Eta(X,Y) - \Eta(X) - \Eta(Y)
\end{align}
satisfies the properties of a metric (triangle inequality, non-negativity, indiscernibility and symmetry), where equality X=Y is understood to mean that X can be completely determined from Y. This distance metric is also known as the variation of information.

If X, Y are discrete random variables then all the entropy terms are non-negative, so 0 \le d(X,Y) \le \Eta(X,Y) and one can define a normalized distance
:D(X,Y) = \frac{d(X,Y)}{\Eta(X,Y)} \le 1.
Plugging in the definitions shows that
:D(X,Y) = 1 - \frac{\operatorname{I}(X;Y)}{\Eta(X,Y)}.
This is known as the Rajski distance. In a set-theoretic interpretation of information (see the figure for conditional entropy), this is effectively the Jaccard distance between X and Y. Finally,
:D^\prime(X, Y) = 1 - \frac{\operatorname{I}(X;Y)}{\max\left(\Eta(X), \Eta(Y)\right)}
is also a metric.
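A short sketch computing the variation of information d(X,Y) and its normalized versions from a joint table (toy numbers again):

<syntaxhighlight lang="python">
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p_xy = np.array([[0.10, 0.15, 0.05],           # hypothetical joint PMF
                 [0.05, 0.30, 0.35]])
H_x = entropy(p_xy.sum(axis=1))
H_y = entropy(p_xy.sum(axis=0))
H_xy = entropy(p_xy.ravel())
I = H_x + H_y - H_xy

d = H_xy - I                                    # variation of information
D = d / H_xy                                    # normalized (Rajski) distance, in [0, 1]
D_prime = 1 - I / max(H_x, H_y)                 # the second metric mentioned above
print(d, D, D_prime)
</syntaxhighlight>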


Conditional mutual information

Sometimes it is useful to express the mutual information of two random variables conditioned on a third:
:\operatorname{I}(X;Y\mid Z) = \mathbb{E}_Z\left[ D_\text{KL}\left( P_{(X,Y)\mid Z} \parallel P_{X\mid Z} \otimes P_{Y\mid Z} \right)\right]
For jointly discrete random variables this takes the form
: \operatorname{I}(X;Y\mid Z) = \sum_{z\in \mathcal{Z}} \sum_{y\in \mathcal{Y}} \sum_{x\in \mathcal{X}} p_Z(z)\, p_{(X,Y)\mid Z}(x,y\mid z) \log \frac{p_{(X,Y)\mid Z}(x,y\mid z)}{p_{X\mid Z}(x\mid z)\, p_{Y\mid Z}(y\mid z)},
which can be simplified as
: \operatorname{I}(X;Y\mid Z) = \sum_{z\in \mathcal{Z}} \sum_{y\in \mathcal{Y}} \sum_{x\in \mathcal{X}} p_{(X,Y,Z)}(x,y,z) \log \frac{p_Z(z)\, p_{(X,Y,Z)}(x,y,z)}{p_{(X,Z)}(x,z)\, p_{(Y,Z)}(y,z)}.
For jointly continuous random variables this takes the form
: \operatorname{I}(X;Y\mid Z) = \int_{\mathcal{Z}} \int_{\mathcal{Y}} \int_{\mathcal{X}} p_Z(z)\, p_{(X,Y)\mid Z}(x,y\mid z) \log \frac{p_{(X,Y)\mid Z}(x,y\mid z)}{p_{X\mid Z}(x\mid z)\, p_{Y\mid Z}(y\mid z)} \, dx\, dy\, dz,
which can be simplified as
: \operatorname{I}(X;Y\mid Z) = \int_{\mathcal{Z}} \int_{\mathcal{Y}} \int_{\mathcal{X}} p_{(X,Y,Z)}(x,y,z) \log \frac{p_Z(z)\, p_{(X,Y,Z)}(x,y,z)}{p_{(X,Z)}(x,z)\, p_{(Y,Z)}(y,z)} \, dx\, dy\, dz.
Conditioning on a third random variable may either increase or decrease the mutual information, but it is always true that
:\operatorname{I}(X;Y\mid Z) \ge 0
for discrete, jointly distributed random variables X, Y, Z. This result has been used as a basic building block for proving other inequalities in information theory.
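A sketch of the discrete formula on a randomly generated (hence hypothetical) three-way PMF:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
p_xyz = rng.random((2, 3, 2))                   # hypothetical joint PMF over (x, y, z)
p_xyz /= p_xyz.sum()

p_z = p_xyz.sum(axis=(0, 1))                    # p(z)
p_xz = p_xyz.sum(axis=1)                        # p(x, z)
p_yz = p_xyz.sum(axis=0)                        # p(y, z)

# I(X;Y|Z) = sum p(x,y,z) log[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ]
num = p_z[None, None, :] * p_xyz
den = p_xz[:, None, :] * p_yz[None, :, :]
cmi = np.sum(p_xyz * np.log(num / den))
print(f"I(X;Y|Z) = {cmi:.4f} nat  (always >= 0)")
</syntaxhighlight>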


Interaction information

Several generalizations of mutual information to more than two random variables have been proposed, such as total correlation (or multi-information) and dual total correlation. The expression and study of multivariate higher-degree mutual information was achieved in two seemingly independent works: McGill (1954), who called these functions "interaction information", and Hu Kuo Ting (1962). Interaction information is defined for one variable as follows:
:\operatorname{I}(X_1) = \Eta(X_1)
and for n > 1,
: \operatorname{I}(X_1;\,\ldots\,;X_n) = \operatorname{I}(X_1;\,\ldots\,;X_{n-1}) - \operatorname{I}(X_1;\,\ldots\,;X_{n-1}\mid X_n).
Some authors reverse the order of the terms on the right-hand side of the preceding equation, which changes the sign when the number of random variables is odd. (And in this case, the single-variable expression becomes the negative of the entropy.) Note that
: \operatorname{I}(X_1;\ldots;X_{n-1}\mid X_n) = \mathbb{E}_{X_n}\left[ D_\text{KL}\left( P_{(X_1,\ldots,X_{n-1})\mid X_n} \parallel P_{X_1\mid X_n} \otimes\cdots\otimes P_{X_{n-1}\mid X_n} \right)\right]
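For three variables the recursion gives \operatorname{I}(X;Y;Z) = \operatorname{I}(X;Y) - \operatorname{I}(X;Y\mid Z) under the sign convention used here; a sketch on a random toy PMF:

<syntaxhighlight lang="python">
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mi_from_joint(p2d):
    """Mutual information between the two axes of a 2-D joint table (nats)."""
    return entropy(p2d.sum(axis=1)) + entropy(p2d.sum(axis=0)) - entropy(p2d.ravel())

rng = np.random.default_rng(1)
p_xyz = rng.random((2, 3, 2))                   # hypothetical joint PMF over (x, y, z)
p_xyz /= p_xyz.sum()

i_xy = mi_from_joint(p_xyz.sum(axis=2))         # I(X;Y)

# I(X;Y|Z) = sum_z p(z) * I(X;Y | Z=z)
i_xy_given_z = sum(p_xyz[:, :, k].sum() * mi_from_joint(p_xyz[:, :, k] / p_xyz[:, :, k].sum())
                   for k in range(p_xyz.shape[2]))

interaction = i_xy - i_xy_given_z               # I(X;Y;Z) with the convention above
print(interaction)                              # may be positive, zero, or negative
</syntaxhighlight>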


Multivariate statistical independence

The multivariate mutual information functions generalize the pairwise independence case, which states that X_1 and X_2 are independent if and only if I(X_1; X_2) = 0, to arbitrarily many variables: n variables are mutually independent if and only if the 2^n - n - 1 mutual information functions vanish, I(X_1; \ldots; X_k) = 0 with n \ge k \ge 2 (theorem 2). In this sense, I(X_1; \ldots; X_k) = 0 can be used as a refined statistical independence criterion.


Applications

For 3 variables, Brenner et al. applied multivariate mutual information to neural coding and called its negativity "synergy", and Watkinson et al. applied it to genetic expression. For arbitrary k variables, Tapia et al. applied multivariate mutual information to gene expression. It can be zero, positive, or negative. The positivity corresponds to relations generalizing the pairwise correlations, nullity corresponds to a refined notion of independence, and negativity detects high-dimensional "emergent" relations and clusterized datapoints.

One high-dimensional generalization scheme which maximizes the mutual information between the joint distribution and other target variables is found to be useful in feature selection.

Mutual information is also used in the area of signal processing as a measure of similarity between two signals. For example, the FMI metric is an image fusion performance measure that makes use of mutual information in order to measure the amount of information that the fused image contains about the source images. Matlab code for this metric has been published. A Python package for computing all multivariate mutual information, conditional mutual information, joint entropies, total correlations and information distance in a dataset of n variables is also available.


Directed information

Directed information, \operatorname{I}\left(X^n \to Y^n\right), measures the amount of information that flows from the process X^n to Y^n, where X^n denotes the vector X_1, X_2, \ldots, X_n and Y^n denotes Y_1, Y_2, \ldots, Y_n. The term ''directed information'' was coined by James Massey and is defined as
: \operatorname{I}\left(X^n \to Y^n\right) = \sum_{i=1}^n \operatorname{I}\left(X^i; Y_i\mid Y^{i-1}\right).
Note that if n=1, the directed information becomes the mutual information. Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback.


Normalized variants

Normalized variants of the mutual information are provided by the ''coefficients of constraint'', uncertainty coefficient or proficiency:
: C_{XY} = \frac{\operatorname{I}(X;Y)}{\Eta(Y)} ~~~~\mbox{and}~~~~ C_{YX} = \frac{\operatorname{I}(X;Y)}{\Eta(X)}.
The two coefficients have values ranging in [0, 1] but are not necessarily equal, so this measure is not symmetric. If one desires a symmetric measure, one can consider the following ''redundancy'' measure:
:R = \frac{\operatorname{I}(X;Y)}{\Eta(X) + \Eta(Y)}
which attains a minimum of zero when the variables are independent and a maximum value of
:R_\max = \frac{\min\left(\Eta(X), \Eta(Y)\right)}{\Eta(X) + \Eta(Y)}
when one variable becomes completely redundant with the knowledge of the other. See also ''Redundancy (information theory)''. Another symmetrical measure is the ''symmetric uncertainty'', given by
:U(X, Y) = 2R = 2\,\frac{\operatorname{I}(X;Y)}{\Eta(X) + \Eta(Y)}
which represents the harmonic mean of the two uncertainty coefficients C_{XY}, C_{YX}.

If we consider mutual information as a special case of the total correlation or dual total correlation, the normalized versions are, respectively,
:\frac{\operatorname{I}(X;Y)}{\min\left(\Eta(X), \Eta(Y)\right)} ~~~~\mbox{and}~~~~ \frac{\operatorname{I}(X;Y)}{\Eta(X, Y)}\,.
The latter, also known as the Information Quality Ratio (IQR), quantifies the amount of information of a variable based on another variable against total uncertainty:
: IQR(X, Y) = \frac{\operatorname{I}(X;Y)}{\Eta(X, Y)} = \frac{\sum_{x \in X} \sum_{y \in Y} p(x, y)\log\left(p(x)\,p(y)\right)}{\sum_{x \in X} \sum_{y \in Y} p(x, y)\log p(x, y)} - 1
There is also a normalization which derives from first thinking of mutual information as an analogue to covariance (thus Shannon entropy is analogous to variance). Then the normalized mutual information is calculated akin to the Pearson correlation coefficient,
: \frac{\operatorname{I}(X;Y)}{\sqrt{\Eta(X)\,\Eta(Y)}}\,.
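All of these normalizations are simple functions of \operatorname{I} and the entropies, as the following illustrative sketch shows (toy joint table again):

<syntaxhighlight lang="python">
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p_xy = np.array([[0.10, 0.15, 0.05],            # hypothetical joint PMF
                 [0.05, 0.30, 0.35]])
H_x = entropy(p_xy.sum(axis=1))
H_y = entropy(p_xy.sum(axis=0))
H_xy = entropy(p_xy.ravel())
I = H_x + H_y - H_xy

c_xy, c_yx = I / H_y, I / H_x                   # coefficients of constraint / uncertainty coefficients
R = I / (H_x + H_y)                             # redundancy measure
U = 2 * R                                       # symmetric uncertainty
iqr = I / H_xy                                  # Information Quality Ratio
nmi = I / np.sqrt(H_x * H_y)                    # covariance-style normalization
print(c_xy, c_yx, R, U, iqr, nmi)
</syntaxhighlight>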


Weighted variants

In the traditional formulation of the mutual information,
: \operatorname{I}(X;Y) = \sum_{y \in Y} \sum_{x \in X} p(x, y) \log \frac{p(x, y)}{p(x)\,p(y)},
each ''event'' or ''object'' specified by (x, y) is weighted by the corresponding probability p(x, y). This assumes that all objects or events are equivalent ''apart from'' their probability of occurrence. However, in some applications it may be the case that certain objects or events are more ''significant'' than others, or that certain patterns of association are more semantically important than others.

For example, the deterministic mapping \{(1,1),(2,2),(3,3)\} may be viewed as stronger than the deterministic mapping \{(1,3),(2,1),(3,2)\}, although these relationships would yield the same mutual information. This is because the mutual information is not sensitive at all to any inherent ordering in the variable values, and is therefore not sensitive at all to the form of the relational mapping between the associated variables. If it is desired that the former relation (showing agreement on all variable values) be judged stronger than the latter relation, then it is possible to use the following ''weighted mutual information'':
: \operatorname{I}_W(X;Y) = \sum_{y \in Y} \sum_{x \in X} w(x,y)\, p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)},
which places a weight w(x,y) on the probability of each variable-value co-occurrence, p(x,y). This allows that certain probabilities may carry more or less significance than others, thereby allowing the quantification of relevant ''holistic'' or ''Prägnanz'' factors. In the above example, using larger relative weights for w(1,1), w(2,2), and w(3,3) would have the effect of assessing greater ''informativeness'' for the relation \{(1,1),(2,2),(3,3)\} than for the relation \{(1,3),(2,1),(3,2)\}, which may be desirable in some cases of pattern recognition, and the like. This weighted mutual information is a form of weighted KL-divergence, which is known to take negative values for some inputs, and there are examples where the weighted mutual information also takes negative values.
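A sketch of the weighted sum for the example above, with a weight matrix chosen purely for illustration (diagonal cells inflated to favour agreement on values):

<syntaxhighlight lang="python">
import numpy as np

p_xy = np.array([[1/3, 0.0, 0.0],               # the deterministic mapping {(1,1),(2,2),(3,3)}
                 [0.0, 1/3, 0.0],
                 [0.0, 0.0, 1/3]])
w = np.where(np.eye(3) == 1, 2.0, 1.0)          # illustrative weights favouring agreement on values

p_x = p_xy.sum(axis=1, keepdims=True)
p_y = p_xy.sum(axis=0, keepdims=True)

mask = p_xy > 0
mi = np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x * p_y)[mask]))
wmi = np.sum(w[mask] * p_xy[mask] * np.log(p_xy[mask] / (p_x * p_y)[mask]))
print(f"unweighted I = {mi:.4f} nat, weighted I_W = {wmi:.4f} nat")
</syntaxhighlight>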


Adjusted mutual information

A probability distribution can be viewed as a partition of a set. One may then ask: if a set were partitioned randomly, what would the distribution of probabilities be? What would the expectation value of the mutual information be? The adjusted mutual information or AMI subtracts the expectation value of the MI, so that the AMI is zero when two different distributions are random, and one when two distributions are identical. The AMI is defined in analogy to the adjusted Rand index of two different partitions of a set.
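In practice the chance adjustment is usually obtained from an existing implementation rather than computed by hand; for example, scikit-learn (assumed available here) exposes it as in the following sketch:

<syntaxhighlight lang="python">
from sklearn.metrics import adjusted_mutual_info_score, mutual_info_score

# Two hypothetical clusterings (partitions) of the same six items.
labels_a = [0, 0, 1, 1, 2, 2]
labels_b = [1, 1, 0, 0, 2, 2]   # the same partition, just with different label names

print(mutual_info_score(labels_a, labels_b))            # raw MI, in nats
print(adjusted_mutual_info_score(labels_a, labels_b))   # 1.0 for identical partitions
print(adjusted_mutual_info_score(labels_a, [0, 1, 2, 0, 1, 2]))  # chance-corrected score for a different partition
</syntaxhighlight>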


Absolute mutual information

Using the ideas of Kolmogorov complexity, one can consider the mutual information of two sequences independent of any probability distribution:
: \operatorname{I}_K(X;Y) = K(X) - K(X\mid Y).
To establish that this quantity is symmetric up to a logarithmic factor (\operatorname{I}_K(X;Y) \approx \operatorname{I}_K(Y;X)) one requires the chain rule for Kolmogorov complexity. Approximations of this quantity via compression can be used to define a distance measure to perform a hierarchical clustering of sequences without having any domain knowledge of the sequences.
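One common approximation is the normalized compression distance, which substitutes the length achieved by an off-the-shelf compressor for the (uncomputable) Kolmogorov complexities; a sketch using zlib:

<syntaxhighlight lang="python">
import os
import zlib

def c(data: bytes) -> int:
    """Compressed length as a computable stand-in for the Kolmogorov complexity K."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte sequences."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"ACGTACGTACGTACGT" * 50
b = b"ACGTACGAACGTACGT" * 50        # a slightly mutated copy of a
r = os.urandom(len(a))              # incompressible noise, unrelated to a

print(ncd(a, b))                    # small: the two sequences share most of their structure
print(ncd(a, r))                    # close to 1: essentially no shared structure
</syntaxhighlight>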


Linear correlation

Unlike correlation coefficients, such as the product-moment correlation coefficient, mutual information contains information about all dependence, linear and nonlinear, and not just the linear dependence that the correlation coefficient measures. However, in the narrow case that the joint distribution for X and Y is a bivariate normal distribution (implying in particular that both marginal distributions are normally distributed), there is an exact relationship between \operatorname{I} and the correlation coefficient \rho:
:\operatorname{I} = -\frac{1}{2} \log\left(1 - \rho^2\right)
The equation above can be derived as follows for a bivariate Gaussian:
:\begin{align}
\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} &\sim \mathcal{N} \left( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \Sigma \right),\qquad \Sigma = \begin{pmatrix} \sigma^2_1 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma^2_2 \end{pmatrix} \\
\Eta(X_i) &= \frac{1}{2}\log\left(2\pi e \sigma_i^2\right) = \frac{1}{2} + \frac{1}{2}\log(2\pi) + \log\left(\sigma_i\right), \quad i\in\{1,2\} \\
\Eta(X_1, X_2) &= \frac{1}{2}\log\left[(2\pi e)^2 |\Sigma|\right] = 1 + \log(2\pi) + \log\left(\sigma_1 \sigma_2\right) + \frac{1}{2}\log\left(1 - \rho^2\right)
\end{align}
Therefore,
: \operatorname{I}\left(X_1; X_2\right) = \Eta\left(X_1\right) + \Eta\left(X_2\right) - \Eta\left(X_1, X_2\right) = -\frac{1}{2}\log\left(1 - \rho^2\right)
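Under the bivariate-Gaussian assumption the mutual information can therefore be estimated from the sample correlation alone; a plug-in sketch with an assumed true correlation of 0.8:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
rho = 0.8                                           # assumed true correlation
cov = [[1.0, rho], [rho, 1.0]]
x1, x2 = rng.multivariate_normal([0, 0], cov, size=50_000).T

rho_hat = np.corrcoef(x1, x2)[0, 1]                 # sample correlation coefficient
mi_hat = -0.5 * np.log(1 - rho_hat**2)              # Gaussian plug-in estimate (nats)
print(f"estimated I = {mi_hat:.4f} nat vs exact {-0.5*np.log(1-rho**2):.4f} nat")
</syntaxhighlight>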


For discrete data

When X and Y are limited to a discrete number of states, observation data are summarized in a contingency table, with row variable X (or i) and column variable Y (or j). Mutual information is one of the measures of association or correlation between the row and column variables. Other measures of association include Pearson's chi-squared test statistic, the G-test statistic, etc. In fact, with the same log base, mutual information will be equal to the G-test log-likelihood statistic divided by 2N, where N is the sample size.
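This relation can be checked on a contingency table with standard routines; the sketch below assumes SciPy and scikit-learn are available and uses a made-up table of counts:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import mutual_info_score

table = np.array([[10, 20, 30],                 # hypothetical contingency table of counts
                  [25,  5, 15]])
n = table.sum()

g_stat, p_value, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
mi = mutual_info_score(None, None, contingency=table)   # MI of the empirical joint, in nats

print(g_stat / (2 * n), mi)                     # the two numbers agree
</syntaxhighlight>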


Applications

In many applications, one wants to maximize mutual information (thus increasing dependencies), which is often equivalent to minimizing conditional entropy. Examples include:
* In search engine technology, mutual information between phrases and contexts is used as a feature for k-means clustering to discover semantic clusters (concepts) (Magerman and Marcus, "Parsing a Natural Language Using Mutual Information Statistics"). For example, the mutual information of a bigram might be calculated as:
: MI(x,y) = \log \frac{P(x,y)}{P(x)\,P(y)} \approx \log \frac{f_{xy}/B}{(f_{x}/U)(f_{y}/U)}
: where f_{xy} is the number of times the bigram xy appears in the corpus, f_{x} is the number of times the unigram x appears in the corpus, B is the total number of bigrams, and U is the total number of unigrams (a minimal counting sketch appears after this list).
* In telecommunications, the channel capacity is equal to the mutual information, maximized over all input distributions.
* Discriminative training procedures for hidden Markov models have been proposed based on the maximum mutual information (MMI) criterion.
* RNA secondary structure prediction from a multiple sequence alignment.
* Phylogenetic profiling prediction from the pairwise presence and disappearance of functionally linked genes.
* Mutual information has been used as a criterion for feature selection and feature transformations in machine learning. It can be used to characterize both the relevance and redundancy of variables, as in minimum redundancy feature selection.
* Mutual information is used in determining the similarity of two different clusterings of a dataset. As such, it provides some advantages over the traditional Rand index.
* Mutual information of words is often used as a significance function for the computation of collocations in corpus linguistics. This has the added complexity that no word-instance is an instance of two different words; rather, one counts instances where two words occur adjacent or in close proximity. This slightly complicates the calculation, since the expected probability of one word occurring within N words of another goes up with N.
* Mutual information is used in medical imaging for image registration. Given a reference image (for example, a brain scan), and a second image which needs to be put into the same coordinate system as the reference image, this image is deformed until the mutual information between it and the reference image is maximized.
* Detection of phase synchronization in time series analysis.
* In the infomax method for neural-net and other machine learning, including the infomax-based independent component analysis algorithm.
* Average mutual information in the delay embedding theorem is used for determining the ''embedding delay'' parameter.
* Mutual information between genes in expression microarray data is used by the ARACNE algorithm for reconstruction of gene networks.
* In statistical mechanics, Loschmidt's paradox may be expressed in terms of mutual information (Hugh Everett, ''Theory of the Universal Wavefunction'', Thesis, Princeton University, (1956, 1973), pp. 1–140 (page 30)). Loschmidt noted that it must be impossible to determine a physical law which lacks time reversal symmetry (e.g. the second law of thermodynamics) only from physical laws which have this symmetry. He pointed out that the H-theorem of Boltzmann made the assumption that the velocities of particles in a gas were permanently uncorrelated, which removed the time symmetry inherent in the H-theorem. It can be shown that if a system is described by a probability density in phase space, then Liouville's theorem implies that the joint information (negative of the joint entropy) of the distribution remains constant in time. The joint information is equal to the mutual information plus the sum of all the marginal information (negative of the marginal entropies) for each particle coordinate. Boltzmann's assumption amounts to ignoring the mutual information in the calculation of entropy, which yields the thermodynamic entropy (divided by the Boltzmann constant).
* In stochastic processes coupled to changing environments, mutual information can be used to disentangle internal and effective environmental dependencies. This is particularly useful when a physical system undergoes changes in the parameters describing its dynamics, e.g., changes in temperature.
* The mutual information is used to learn the structure of Bayesian networks/ dynamic Bayesian networks, which is thought to explain the causal relationship between random variables, as exemplified by the GlobalMIT toolkit: learning the globally optimal dynamic Bayesian network with the Mutual Information Test criterion.
* The mutual information is used to quantify information transmitted during the updating procedure in the Gibbs sampling algorithm.
* Popular cost function in decision tree learning.
* The mutual information is used in cosmology to test the influence of large-scale environments on galaxy properties in the Galaxy Zoo.
* The mutual information was used in solar physics to derive the solar differential rotation profile, a travel-time deviation map for sunspots, and a time–distance diagram from quiet-Sun measurements.
* Used in Invariant Information Clustering to automatically train neural network classifiers and image segmenters given no labelled data (Xu Ji, Joao Henriques and Andrea Vedaldi, "Invariant Information Clustering for Unsupervised Image Classification and Segmentation").
* In stochastic dynamical systems with multiple timescales, mutual information has been shown to capture the functional couplings between different temporal scales. Importantly, it was shown that physical interactions may or may not give rise to mutual information, depending on the typical timescale of their dynamics.
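Returning to the bigram example in the first item above, a minimal counting sketch on a toy corpus (adjacent-word bigrams only):

<syntaxhighlight lang="python">
import math
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()   # toy corpus

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
U, B = sum(unigrams.values()), sum(bigrams.values())

def bigram_mi(x: str, y: str) -> float:
    """log P(x,y) / (P(x) P(y)) estimated from corpus counts (nats)."""
    return math.log((bigrams[(x, y)] / B) / ((unigrams[x] / U) * (unigrams[y] / U)))

print(bigram_mi("the", "cat"))    # positive: 'the cat' co-occurs more often than chance predicts
</syntaxhighlight>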


See also

* Data differencing
* Pointwise mutual information
* Quantum mutual information
* Specific-information


Notes


References

* English translation of original in ''Uspekhi Matematicheskikh Nauk'' 12 (1): 3–52.
* David J. C. MacKay. ''Information Theory, Inference, and Learning Algorithms''. Cambridge: Cambridge University Press, 2003. (available free online)
* Athanasios Papoulis. ''Probability, Random Variables, and Stochastic Processes'', second edition. New York: McGraw-Hill, 1984. ''(See Chapter 15.)''