In information theory, the entropy of a random variable quantifies the average level of uncertainty or information associated with the variable's potential states or possible outcomes. This measures the expected amount of information needed to describe the state of the variable, considering the distribution of probabilities across all potential states. Given a discrete random variable X, which may be any member x within the set \mathcal{X} and is distributed according to p\colon \mathcal{X}\to[0, 1], the entropy is \Eta(X) := -\sum_{x \in \mathcal{X}} p(x) \log p(x), where \Sigma denotes the sum over the variable's possible values. The choice of base for \log, the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base ''e'' gives "natural units" (nats), and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable. The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem. Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate the energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics and machine learning. The definition can be derived from a set of axioms establishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable, differential entropy is analogous to entropy. The definition \mathbb{E}[-\log p(X)] generalizes the above.


Introduction

The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular number ''will not'' be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number ''will'' win a lottery has high informational value because it communicates the occurrence of a very low probability event. The ''information content'', also called the ''surprisal'' or ''self-information'', of an event E is a function that increases as the probability p(E) of the event decreases. When p(E) is close to 1, the surprisal of the event is low, but if p(E) is close to 0, the surprisal of the event is high. This relationship is described by the function \log\left(\tfrac{1}{p(E)}\right), where \log is the logarithm, which gives 0 surprise when the probability of the event is 1. In fact, \log is the only function that satisfies a specific set of conditions defined in the section ''Characterization'' below. Hence, we can define the information, or surprisal, of an event E by I(E) = \log\left(\tfrac{1}{p(E)}\right), or equivalently, I(E) = -\log(p(E)). Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial. This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die roll has smaller probability (p = 1/6) than each outcome of a coin toss (p = 1/2). Consider a coin with probability ''p'' of landing on heads and probability 1 − ''p'' of landing on tails. The maximum surprise is when ''p'' = 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of one bit (similarly, one trit with equiprobable values contains \log_2 3 (about 1.58496) bits of information because it can take one of three values). The minimum surprise is when ''p'' = 0 (impossibility) or ''p'' = 1 (certainty), and the entropy is zero bits. When the entropy is zero bits, this is sometimes referred to as unity; there is no uncertainty at all – no freedom of choice – no information. Other values of ''p'' give entropies between zero and one bits.
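
These quantities can be computed directly. The following is an illustrative Python sketch (the probability values are arbitrary examples) of the surprisal −log2 p(E) and of the entropy of a fair coin and a fair die:

    import math

    def surprisal(p):
        """Information content -log2(p), in bits, of an event with probability p."""
        return -math.log2(p)

    def entropy(probs):
        """Shannon entropy, in bits, of a discrete distribution given as a list of probabilities."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(surprisal(0.999))      # ~0.0014 bits: an almost-certain event is barely informative
    print(surprisal(1e-6))       # ~19.9 bits: a very unlikely event is highly informative
    print(entropy([0.5, 0.5]))   # 1.0 bit for a fair coin
    print(entropy([1/6] * 6))    # ~2.585 bits for a fair die, higher than for the coin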


Example

Information theory is useful to calculate the smallest amount of information required to convey a message, as in data compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect. English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message (Schneier, B: ''Applied Cryptography'', Second edition, John Wiley and Sons).
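
The figures in this example can be checked with a short, illustrative Python sketch (the probabilities and code lengths are those given above):

    import math

    probs = {'A': 0.70, 'B': 0.26, 'C': 0.02, 'D': 0.02}
    code_lengths = {'A': 1, 'B': 2, 'C': 3, 'D': 3}   # the variable-length code '0', '10', '110', '111'

    # Entropy: probability-weighted sum of -log2(p) over the four letters.
    entropy = -sum(p * math.log2(p) for p in probs.values())

    # Expected number of bits per letter under the variable-length code.
    avg_bits = sum(probs[c] * code_lengths[c] for c in probs)

    print(round(entropy, 3))    # ~1.091 bits per letter, well below the 2 bits needed for equiprobable letters
    print(round(avg_bits, 2))   # 1.34 bits per letter on average for this particular code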


Definition

Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable X, which takes values in the set \mathcal{X} and is distributed according to p\colon \mathcal{X} \to [0, 1] such that p(x) := \mathbb{P}[X = x]: \Eta(X) = \mathbb{E}[\operatorname{I}(X)] = \mathbb{E}[-\log p(X)]. Here \mathbb{E} is the expected value operator, and \operatorname{I} is the information content of X; \operatorname{I}(X) is itself a random variable. The entropy can explicitly be written as: \Eta(X) = -\sum_{x \in \mathcal{X}} p(x)\log_b p(x), where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the corresponding units of entropy are the bits for b = 2, nats for b = e, and bans for b = 10. In the case of p(x) = 0 for some x \in \mathcal{X}, the value of the corresponding summand 0 \log_b (0) is taken to be 0, which is consistent with the limit: \lim_{p \to 0^+} p \log (p) = 0. One may also define the conditional entropy of two variables X and Y taking values from sets \mathcal{X} and \mathcal{Y} respectively, as: \Eta(X\mid Y)=-\sum_{x, y \in \mathcal{X} \times \mathcal{Y}} p_{X,Y}(x,y)\log\frac{p_{X,Y}(x,y)}{p_Y(y)}, where p_{X,Y}(x,y) := \mathbb{P}[X=x, Y=y] and p_Y(y) = \mathbb{P}[Y = y]. This quantity should be understood as the remaining randomness in the random variable X given the random variable Y.
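
A direct translation of these definitions into code can make them concrete. The following Python sketch is illustrative (the joint distribution at the end is an arbitrary example); it computes \Eta(X) from a probability mass function and \Eta(X\mid Y) from a joint distribution:

    import math
    from collections import defaultdict

    def entropy(pmf, base=2):
        """H(X) = -sum_x p(x) log_b p(x); summands with p(x) = 0 contribute 0."""
        return -sum(p * math.log(p, base) for p in pmf.values() if p > 0)

    def conditional_entropy(joint, base=2):
        """H(X|Y) = -sum_{x,y} p(x,y) log_b [ p(x,y) / p(y) ] for a joint pmf {(x, y): p}."""
        p_y = defaultdict(float)
        for (x, y), p in joint.items():
            p_y[y] += p
        return -sum(p * math.log(p / p_y[y], base) for (x, y), p in joint.items() if p > 0)

    # Example: X is a fair bit and Y is a noisy copy of X that agrees with it 90% of the time.
    joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
    print(entropy({0: 0.5, 1: 0.5}))    # 1.0 bit
    print(conditional_entropy(joint))   # ~0.469 bits: knowing Y removes most of the uncertainty about X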


Measure theory

Entropy can be formally defined in the language of measure theory as follows: Let (X, \Sigma, \mu) be a probability space. Let A \in \Sigma be an event. The surprisal of A is \sigma_\mu(A) = -\ln \mu(A). The ''expected'' surprisal of A is h_\mu(A) = \mu(A) \sigma_\mu(A). A \mu-almost partition is a set family P \subseteq \mathcal{P}(X) such that \mu\left(\bigcup P\right) = 1 and \mu(A \cap B) = 0 for all distinct A, B \in P. (This is a relaxation of the usual conditions for a partition.) The entropy of P is \Eta_\mu(P) = \sum_{A \in P} h_\mu(A). Let M be a sigma-algebra on X. The entropy of M is \Eta_\mu(M) = \sup_{P \subseteq M} \Eta_\mu(P), the supremum being taken over \mu-almost partitions consisting of elements of M. Finally, the entropy of the probability space is \Eta_\mu(\Sigma), that is, the entropy with respect to \mu of the sigma-algebra of ''all'' measurable subsets of X.


Example

Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modeled as a Bernoulli process. The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because \begin{align} \Eta(X) &= -\sum_{i=1}^n p_i \log_2 p_i \\ &= -\sum_{i=1}^2 \frac{1}{2}\log_2 \frac{1}{2} \\ &= -\sum_{i=1}^2 \frac{1}{2} \cdot (-1) = 1. \end{align} However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, where p \neq q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if p = 0.7, then \begin{align} \Eta(X) &= - p \log_2 p - q \log_2 q \\ &= - 0.7 \log_2 (0.7) - 0.3 \log_2 (0.3) \\ &\approx - 0.7 \cdot (-0.515) - 0.3 \cdot (-1.737) \\ &= 0.8816 < 1. \end{align} Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.
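
The numbers above can be reproduced with the binary entropy function; the sketch below (illustrative Python) evaluates it on a few values of p:

    import math

    def binary_entropy(p):
        """Entropy, in bits, of a coin that lands heads with probability p."""
        if p in (0.0, 1.0):
            return 0.0
        q = 1.0 - p
        return -p * math.log2(p) - q * math.log2(q)

    for p in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0):
        print(p, round(binary_entropy(p), 4))
    # The maximum, 1.0 bit, occurs at p = 0.5; binary_entropy(0.7) is ~0.8813
    # (the value 0.8816 above reflects the rounded intermediate logarithms).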


Characterization

To understand the meaning of -\sum p_i \log(p_i), first define an information function \operatorname{I} in terms of an event i with probability p_i. The amount of information acquired due to the observation of event i follows from Shannon's solution of the fundamental properties of information:
# \operatorname{I}(p) is monotonically decreasing in p: an increase in the probability of an event decreases the information from an observed event, and vice versa.
# \operatorname{I}(1) = 0: events that always occur do not communicate information.
# \operatorname{I}(p_1 \cdot p_2) = \operatorname{I}(p_1) + \operatorname{I}(p_2): the information learned from independent events is the sum of the information learned from each event.
# \operatorname{I}(p) is a twice continuously differentiable function of p.
Given two independent events, if the first event can yield one of n equiprobable outcomes and another has one of m equiprobable outcomes then there are mn equiprobable outcomes of the joint event. This means that if \log_2(n) bits are needed to encode the first value and \log_2(m) to encode the second, one needs \log_2(mn) = \log_2(m) + \log_2(n) to encode both. Shannon discovered that a suitable choice of \operatorname{I} is given by: \operatorname{I}(p) = \log\left(\tfrac{1}{p}\right) = -\log(p). In fact, the only possible values of \operatorname{I} are \operatorname{I}(u) = k \log u for k<0. Additionally, choosing a value for k is equivalent to choosing a value x>1 for k = - 1/\log x, so that x corresponds to the base for the logarithm. Thus, entropy is characterized by the above four properties. The different units of information (bits for the binary logarithm \log_2, nats for the natural logarithm \ln, bans for the decimal logarithm \log_{10} and so on) are constant multiples of each other. For instance, in case of a fair coin toss, heads provides \log_2(2) = 1 bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, n tosses provide n bits of information, which is approximately 0.693n nats or 0.301n decimal digits. The ''meaning'' of the events observed (the meaning of ''messages'') does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.
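
A quick numerical check of the additivity property, as an illustrative Python sketch (the event probabilities are arbitrary): for independent events the information of the joint outcome equals the sum of the individual informations, and the value in one unit is a constant multiple of the value in another.

    import math

    def info(p, base=2):
        """I(p) = -log_b(p), the information of an event with probability p."""
        return -math.log(p, base)

    p1, p2 = 1/6, 1/2   # e.g. a particular die face and a particular coin face
    assert math.isclose(info(p1 * p2), info(p1) + info(p2))   # additivity for independent events

    # Bits, nats and decimal digits are constant multiples of each other:
    print(info(0.5, 2), info(0.5, math.e), info(0.5, 10))     # 1.0 bit, ~0.693 nats, ~0.301 decimal digits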


Alternative characterization

Another characterization of entropy uses the following properties. We denote p_i = \Pr(X = x_i) and \Eta_n(p_1, \ldots, p_n) = \Eta(X).
# Continuity: \Eta should be continuous, so that changing the values of the probabilities by a very small amount should only change the entropy by a small amount.
# Symmetry: \Eta should be unchanged if the outcomes x_i are re-ordered. That is, \Eta_n\left(p_1, p_2, \ldots, p_n \right) = \Eta_n\left(p_{i_1}, p_{i_2}, \ldots, p_{i_n} \right) for any permutation \{i_1, \ldots, i_n\} of \{1, \ldots, n\}.
# Maximum: \Eta_n should be maximal if all the outcomes are equally likely, i.e. \Eta_n(p_1,\ldots,p_n) \le \Eta_n\left(\frac{1}{n}, \ldots, \frac{1}{n}\right).
# Increasing number of outcomes: for equiprobable events, the entropy should increase with the number of outcomes, i.e. \Eta_n\bigg(\underbrace{\tfrac{1}{n}, \ldots, \tfrac{1}{n}}_{n}\bigg) < \Eta_{n+1}\bigg(\underbrace{\tfrac{1}{n+1}, \ldots, \tfrac{1}{n+1}}_{n+1}\bigg).
# Additivity: given an ensemble of n uniformly distributed elements that are partitioned into k boxes (sub-systems) with b_1, \ldots, b_k elements each, the entropy of the whole ensemble should be equal to the sum of the entropy of the system of boxes and the individual entropies of the boxes, each weighted with the probability of being in that particular box.


Discussion

The rule of additivity has the following consequences: for positive integers b_1, \ldots, b_k where b_1 + \ldots + b_k = n, \Eta_n\left(\frac{1}{n}, \ldots, \frac{1}{n}\right) = \Eta_k\left(\frac{b_1}{n}, \ldots, \frac{b_k}{n}\right) + \sum_{i=1}^k \frac{b_i}{n} \, \Eta_{b_i}\left(\frac{1}{b_i}, \ldots, \frac{1}{b_i}\right). Choosing k = n, b_1 = \ldots = b_n = 1, this implies that the entropy of a certain outcome is zero: \Eta_1(1) = 0. This implies that the efficiency of a source set with n symbols can be defined simply as being equal to its n-ary entropy. See also Redundancy (information theory). The characterization here imposes an additive property with respect to a partition of a set. Meanwhile, the conditional probability is defined in terms of a multiplicative property, P(A\mid B)\cdot P(B)=P(A\cap B). Observe that a logarithm mediates between these two operations. The conditional entropy and related quantities inherit simple relations, in turn. The measure theoretic definition in the previous section defined the entropy as a sum over expected surprisals \mu(A)\cdot(-\ln\mu(A)) for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. At least in the information theory of a binary string, \log_2 lends itself to practical interpretations. Motivated by such relations, a plethora of related and competing quantities have been defined. For example, David Ellerman's analysis of a "logic of partitions" defines a competing measure in structures dual to that of subsets of a universal set. Information is quantified as "dits" (distinctions), a measure on partitions. "Dits" can be converted into Shannon's bits, to get the formulas for conditional entropy, and so on.
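
The grouping rule above is easy to verify numerically. The following illustrative Python sketch uses n = 6 outcomes split into boxes of sizes 2 and 4 (an arbitrary choice):

    import math

    def H(probs):
        """Shannon entropy, in bits, of a finite distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    n, boxes = 6, [2, 4]                # partition 6 equiprobable outcomes into boxes of 2 and 4 elements
    lhs = H([1/n] * n)                  # H_6(1/6, ..., 1/6) = log2(6)
    rhs = H([b/n for b in boxes]) + sum((b/n) * H([1/b] * b) for b in boxes)
    assert math.isclose(lhs, rhs)
    print(lhs)                          # ~2.585 bits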


Alternative characterization via additivity and subadditivity

Another succinct axiomatic characterization of Shannon entropy was given by Aczél, Forte and Ng, via the following properties:
# Subadditivity: \Eta(X,Y) \le \Eta(X)+\Eta(Y) for jointly distributed random variables X,Y.
# Additivity: \Eta(X,Y) = \Eta(X)+\Eta(Y) when the random variables X,Y are independent.
# Expansibility: \Eta_{n+1}(p_1, \ldots, p_n, 0) = \Eta_n(p_1, \ldots, p_n), i.e., adding an outcome with probability zero does not change the entropy.
# Symmetry: \Eta_n(p_1, \ldots, p_n) is invariant under permutation of p_1, \ldots, p_n.
# Small for small probabilities: \lim_{q \to 0^+} \Eta_2(1-q, q) = 0.


Discussion

It was shown that any function \Eta satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant. Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vector p_1,\ldots ,p_n. It is worth noting that if we drop the "small for small probabilities" property, then \Eta must be a non-negative linear combination of the Shannon entropy and the Hartley entropy.


Further properties

The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variable X; a numerical check of several of these properties follows the list:
* Adding or removing an event with probability zero does not contribute to the entropy: \Eta_{n+1}(p_1,\ldots,p_n,0) = \Eta_n(p_1,\ldots,p_n).
* The maximal entropy of a variable with ''n'' different outcomes is \log_b (n): it is attained by the uniform probability distribution. That is, uncertainty is maximal when all possible events are equiprobable: \Eta(p_1,\dots,p_n) \leq \log_b (n).
* The entropy or the amount of information revealed by evaluating (X,Y) (that is, evaluating X and Y simultaneously) is equal to the information revealed by conducting two consecutive experiments: first evaluating the value of Y, then revealing the value of X given that you know the value of Y. This may be written as: \Eta(X,Y)=\Eta(X\mid Y)+\Eta(Y)=\Eta(Y\mid X)+\Eta(X).
* If Y=f(X) where f is a function, then \Eta(f(X)\mid X) = 0. Applying the previous formula to \Eta(X,f(X)) yields \Eta(X)+\Eta(f(X)\mid X)=\Eta(f(X))+\Eta(X\mid f(X)), so \Eta(f(X)) \le \Eta(X): the entropy of a variable can only decrease when the latter is passed through a function.
* If X and Y are two independent random variables, then knowing the value of Y doesn't influence our knowledge of the value of X (since the two don't influence each other by independence): \Eta(X\mid Y)=\Eta(X).
* More generally, for any random variables X and Y, we have \Eta(X\mid Y)\leq \Eta(X).
* The entropy of two simultaneous events is no more than the sum of the entropies of each individual event, i.e., \Eta(X,Y)\leq \Eta(X)+\Eta(Y), with equality if and only if the two events are independent.
* The entropy \Eta(p) is concave in the probability mass function p, i.e. \Eta(\lambda p_1 + (1-\lambda) p_2) \ge \lambda \Eta(p_1) + (1-\lambda) \Eta(p_2) for all probability mass functions p_1,p_2 and 0 \le \lambda \le 1.
** Accordingly, the negative entropy (negentropy) function is convex, and its convex conjugate is LogSumExp.
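
Several of these properties can be checked numerically on a small joint distribution; the sketch below is illustrative Python with an arbitrary joint probability mass function:

    import math
    from collections import defaultdict

    def H(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}   # arbitrary joint pmf of (X, Y)
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p

    H_xy = H(joint.values())
    H_x, H_y = H(px.values()), H(py.values())
    H_x_given_y = H_xy - H_y                   # chain rule: H(X,Y) = H(X|Y) + H(Y)

    assert H_xy <= H_x + H_y + 1e-12           # subadditivity
    assert H_x_given_y <= H_x + 1e-12          # conditioning does not increase entropy
    print(round(H_xy, 4), round(H_x, 4), round(H_y, 4))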


Aspects


Relationship to thermodynamic entropy

The inspiration for adopting the word ''entropy'' in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics. In statistical thermodynamics the most general formula for the thermodynamic entropy of a thermodynamic system is the Gibbs entropy S = - k_\text{B} \sum_i p_i \ln p_i, where k_\text{B} is the Boltzmann constant, and p_i is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Ludwig Boltzmann (1872). The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy introduced by John von Neumann in 1927: S = - k_\text{B} \operatorname{Tr}(\rho \ln \rho), where ρ is the density matrix of the quantum mechanical system and Tr is the trace. At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in ''changes'' in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constant k_\text{B} indicates, the changes in S / k_\text{B} for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy. The connection between thermodynamics and what is now known as information theory was first made by Boltzmann and expressed by his equation: S = k_\text{B} \ln W, where S is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.), W is the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, and k_\text{B} is the Boltzmann constant. It is assumed that each microstate is equally likely, so that the probability of a given microstate is p_i = 1/W. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalently k_\text{B} times the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate. In the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an ''application'' of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article: ''maximum entropy thermodynamics''.) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient.


Data compression

Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or arithmetic coding. (See also Kolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors. The entropy rate of a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English; the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text. If a compression scheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but is communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has less redundancy. Shannon's source coding theorem states a lossless compression scheme cannot compress messages, on average, to have ''more'' than one bit of information per bit of message, but that any value ''less'' than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shorten ''all'' messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger. A 2011 study in ''Science'' estimates the world's technological capacity to store and communicate optimally compressed information, normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources ("The World's Technological Capacity to Store, Communicate, and Compute Information", Martin Hilbert and Priscila López (2011), ''Science'', 332(6025); free access to the article through martinhilbert.net/WorldInfoCapacity.html). The authors estimate humankind's technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories: to store information on a medium, to receive information through one-way broadcast networks, or to exchange information through two-way telecommunications networks.
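
An order-0 (single-character frequency) estimate of the entropy of a piece of text is straightforward to compute. The sketch below is illustrative Python with an arbitrary sample string; it is not the method used in the studies cited above:

    import math
    from collections import Counter

    def char_entropy(text):
        """Order-0 entropy estimate, in bits per character, from single-character frequencies."""
        counts = Counter(text)
        total = len(text)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    sample = "the quick brown fox jumps over the lazy dog and then sleeps"
    print(round(char_entropy(sample), 3))
    # This ignores dependencies between characters, so it overestimates the true entropy rate
    # of English, which prediction experiments place at roughly 0.6 to 1.3 bits per character.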


Entropy as a measure of diversity

Entropy is one of several ways to measure biodiversity and is applied in the form of the Shannon index. A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecological richness, evenness, and dominance. Specifically, Shannon entropy is the logarithm of {}^1\!D, the true diversity index with parameter equal to 1. The Shannon index is related to the proportional abundances of types.
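
As an illustration, the following Python sketch (with invented species abundances) computes the Shannon index of a community and the corresponding true diversity, the exponential of the index:

    import math

    abundances = [50, 30, 15, 5]        # individuals counted per species (hypothetical data)
    total = sum(abundances)
    proportions = [a / total for a in abundances]

    shannon_index = -sum(p * math.log(p) for p in proportions)   # conventionally uses natural logarithms
    true_diversity = math.exp(shannon_index)                     # the "effective number of species"

    print(round(shannon_index, 3), round(true_diversity, 2))     # ~1.142 and ~3.13 for these abundances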


Entropy of a sequence

There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message:
* the self-information of an individual message or symbol taken from a given probability distribution (message or sequence seen as an individual event),
* the joint entropy of the symbols forming the message or sequence (seen as a set of events),
* the entropy rate of a stochastic process (message or sequence is seen as a succession of events).
(The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information. It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropy ''rate''. Shannon himself used the term in this way. If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there are N published books, and each book is only published once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is -\log_2(1/N) = \log_2(N). As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest. The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately \log_2(n). The first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F(n) = F(n-1) + F(n-2) for n = 3, 4, 5, \ldots, with F(1) = 1, F(2) = 1], and this formula has a much lower entropy and applies to any length of the Fibonacci sequence.


Limitations of entropy in cryptography

In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average) 2^{127} guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly. Instead, a measure called ''guesswork'' can be used to measure the effort required for a brute force attack. Other problems may arise from non-uniform distributions used in cryptography. For example, consider a 1,000,000-digit binary one-time pad using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy), it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all.


Data as a Markov process

A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independent of the last characters), the binary entropy is: \Eta(\mathcal{S}) = - \sum_i p_i \log p_i, where p_i is the probability of i. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is: \Eta(\mathcal{S}) = - \sum_i p_i \sum_j p_i (j) \log p_i (j), where i is a state (certain preceding characters) and p_i(j) is the probability of j given i as the previous character. For a second-order Markov source, the entropy rate is \Eta(\mathcal{S}) = -\sum_i p_i \sum_j p_i(j) \sum_k p_{i,j}(k)\ \log p_{i,j}(k).
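
The first-order formula can be evaluated directly from a transition matrix. The following Python sketch is illustrative; the two-symbol source and its probabilities are invented:

    import math

    # Transition probabilities p_i(j) for a two-symbol source {a, b} (hypothetical values).
    transition = {
        'a': {'a': 0.9, 'b': 0.1},
        'b': {'a': 0.5, 'b': 0.5},
    }
    # Stationary distribution p_i of the chain (solves p = pP); for these numbers it is 5/6, 1/6.
    stationary = {'a': 5/6, 'b': 1/6}

    rate = -sum(
        stationary[i] * sum(p * math.log2(p) for p in transition[i].values() if p > 0)
        for i in transition
    )
    print(round(rate, 4))   # ~0.5575 bits per symbol, below the 1 bit of a fair memoryless binary source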


Efficiency (normalized entropy)

A source set \mathcal{X} with a non-uniform distribution will have less entropy than the same set with a uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency: \eta(X) = \frac{\Eta(X)}{\Eta_{\max}} = -\sum_{i=1}^n \frac{p(x_i) \log_b (p(x_i))}{\log_b (n)}. Applying the basic properties of the logarithm, this quantity can also be expressed as: \begin{align} \eta(X) &= -\sum_{i=1}^n \frac{p(x_i) \log_b (p(x_i))}{\log_b (n)} = \sum_{i=1}^n \frac{\log_b \left(p(x_i)^{-p(x_i)}\right)}{\log_b (n)} \\ &= \sum_{i=1}^n \log_n\left(p(x_i)^{-p(x_i)}\right) = \log_n \left(\prod_{i=1}^n p(x_i)^{-p(x_i)}\right). \end{align} Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy \log_b (n). Furthermore, the efficiency is indifferent to the choice of (positive) base b, since the base cancels in the final expression above.
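
A minimal Python illustration (the skewed distribution is the four-symbol example from above) of efficiency as entropy divided by the maximum entropy \log_b(n):

    import math

    def efficiency(probs, base=2):
        """Normalized entropy H(X) / log_b(n) of a distribution over n symbols."""
        n = len(probs)
        H = -sum(p * math.log(p, base) for p in probs if p > 0)
        return H / math.log(n, base)

    print(round(efficiency([0.70, 0.26, 0.02, 0.02]), 3))   # ~0.546 for the skewed four-symbol source
    print(efficiency([0.25] * 4))                           # 1.0 for the uniform source
    # The result is the same for any base, e.g. efficiency([...], base=10) returns the same value.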


Entropy for continuous random variables


Differential entropy

The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable with probability density function f(x) with finite or infinite support \mathbb{X} on the real line is defined by analogy, using the above form of the entropy as an expectation: \Eta(X) = \mathbb{E}[-\log f(X)] = -\int_\mathbb{X} f(x) \log f(x)\, \mathrm{d}x. This is the differential entropy (or continuous entropy). A precursor of the continuous entropy h[f] is the expression for the functional Η in the H-theorem of Boltzmann. Although the analogy between both functions is suggestive, the following question must be set: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notably limiting density of discrete points. To answer this question, a connection must be established between the two functions, in order to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the n (finite or infinite) bins whose probabilities are denoted by p_n. As the continuous domain is generalized, the width must be made explicit. To do this, start with a continuous function f discretized into bins of size \Delta. By the mean-value theorem there exists a value x_i in each bin such that f(x_i) \Delta = \int_{i\Delta}^{(i+1)\Delta} f(x)\, dx, and the integral of the function f can be approximated (in the Riemannian sense) by \int_{-\infty}^{\infty} f(x)\, dx = \lim_{\Delta \to 0} \sum_{i = -\infty}^{\infty} f(x_i) \Delta, where this limit and "bin size goes to zero" are equivalent. We will denote \Eta^{\Delta} := - \sum_{i=-\infty}^{\infty} f(x_i) \Delta \log \left( f(x_i) \Delta \right), and expanding the logarithm, we have \Eta^{\Delta} = - \sum_{i=-\infty}^{\infty} f(x_i) \Delta \log (f(x_i)) - \sum_{i=-\infty}^{\infty} f(x_i) \Delta \log (\Delta). As \Delta \to 0, we have \begin{align} \sum_{i=-\infty}^{\infty} f(x_i) \Delta &\to \int_{-\infty}^{\infty} f(x)\, dx = 1 \\ \sum_{i=-\infty}^{\infty} f(x_i) \Delta \log (f(x_i)) &\to \int_{-\infty}^{\infty} f(x) \log f(x)\, dx. \end{align} Note that \log (\Delta) \to -\infty as \Delta \to 0; this requires a special definition of the differential or continuous entropy: h[f] = \lim_{\Delta \to 0} \left(\Eta^{\Delta} + \log \Delta\right) = -\int_{-\infty}^{\infty} f(x) \log f(x)\,dx, which is, as said before, referred to as the differential entropy. This means that the differential entropy ''is not'' a limit of the Shannon entropy for n \to \infty. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article on information dimension).
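
A small numerical experiment illustrates the construction. In the illustrative Python sketch below, the density is uniform on [0, a] (chosen because its differential entropy is \ln a in closed form); it shows that \Eta^{\Delta} grows without bound as the bins shrink, while \Eta^{\Delta} + \log \Delta matches the differential entropy, which is negative here:

    import math

    def discretized_entropy(f, lo, hi, n_bins):
        """H^Delta = -sum f(x_i) Delta log(f(x_i) Delta) for a density f cut into n_bins bins."""
        delta = (hi - lo) / n_bins
        H = 0.0
        for i in range(n_bins):
            x = lo + (i + 0.5) * delta       # bin midpoint as a stand-in for the mean-value point
            p = f(x) * delta
            if p > 0:
                H -= p * math.log(p)
        return H, delta

    a = 0.5                                  # uniform density on [0, a]; differential entropy = ln(a) < 0
    f = lambda x: 1.0 / a
    for n_bins in (10, 100, 1000):
        H, delta = discretized_entropy(f, 0.0, a, n_bins)
        print(n_bins, round(H, 4), round(H + math.log(delta), 4))
    print(math.log(a))                       # ~ -0.6931; the last printed column equals this value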


Limiting density of discrete points

It turns out as a result that, unlike the Shannon entropy, the differential entropy is ''not'' in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units when x is a dimensioned variable. f(x) will then have the units of 1/x. The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. If \Delta is some "standard" value of x (i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as: \Eta=-\int_{-\infty}^\infty f(x) \log(f(x)\,\Delta)\,dx, and the result will be the same for any choice of units for x. In fact, the limit of discrete entropy as N \rightarrow \infty would also include a term of \log(N), which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. The limiting density of discrete points is really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme.


Relative entropy

Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the distribution to a reference measure m as follows. Assume that a probability distribution p is absolutely continuous with respect to a measure m, i.e. is of the form p(dx) = f(x)\, m(dx) for some non-negative m-integrable function f with m-integral 1, then the relative entropy can be defined as D_{\mathrm{KL}}(p \| m) = \int \log (f(x))\, p(dx) = \int f(x)\log (f(x))\, m(dx). In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measure m is the counting measure, and the differential entropy, where the measure m is the Lebesgue measure. If the measure m is itself a probability distribution, the relative entropy is non-negative, and zero if p = m as measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measure m. The relative entropy, and (implicitly) entropy and differential entropy, do depend on the "reference" measure m.
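
In the discrete case, with the counting measure as the reference, the relative entropy reduces to the familiar Kullback–Leibler divergence between two probability mass functions. A short illustrative Python sketch (with arbitrary distributions):

    import math

    def kl_divergence(p, q, base=2):
        """D_KL(p || q) = sum_x p(x) log_b [ p(x) / q(x) ]; requires q(x) > 0 wherever p(x) > 0."""
        return sum(px * math.log(px / q[x], base) for x, px in p.items() if px > 0)

    p = {'a': 0.5, 'b': 0.4, 'c': 0.1}
    q = {'a': 1/3, 'b': 1/3, 'c': 1/3}        # uniform reference distribution

    print(round(kl_divergence(p, q), 4))      # non-negative, and 0 only when p == q
    # Against a uniform reference over n outcomes, D_KL(p || uniform) = log2(n) - H(p):
    H_p = -sum(v * math.log2(v) for v in p.values())
    print(round(math.log2(3) - H_p, 4))       # same value as above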


Use in number theory

Terence Tao used entropy to make a useful connection trying to solve the Erdős discrepancy problem. Intuitively, the idea behind the proof was that if there is low information, in terms of the Shannon entropy, between consecutive random variables (here the random variable is defined using the Liouville function, a useful mathematical function for studying the distribution of primes, via \lambda(n+H)), then the sum over an interval [n, n+H] could become arbitrarily large. For example, a sequence of +1's (which are values \lambda(n+H) could take) has trivially low entropy and its sum would become big. The key insight was showing that a reduction in entropy by non-negligible amounts as H expands leads, in turn, to unbounded growth of a mathematical object over this random variable, which is equivalent to the unbounded growth asserted by the Erdős discrepancy problem. The proof is quite involved and it brought together breakthroughs not just in the novel use of Shannon entropy, but also in the use of the Liouville function along with averages of modulated multiplicative functions in short intervals. Proving it also broke the "parity barrier" for this specific problem. While the use of Shannon entropy in the proof is novel, it is likely to open new research in this direction.


Use in combinatorics

Entropy has become a useful quantity in combinatorics.


Loomis–Whitney inequality

A simple example of this is an alternative proof of the Loomis–Whitney inequality: for every subset A \subseteq \mathbb{Z}^d, we have |A|^{d-1}\leq \prod_{i=1}^{d} |P_{i}(A)|, where P_{i} is the orthogonal projection in the i-th coordinate: P_{i}(A)=\{(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d) : (x_1, \ldots, x_d)\in A\}. The proof follows as a simple corollary of Shearer's inequality: if X_1, \ldots, X_d are random variables and S_1, \ldots, S_n are subsets of \{1, \ldots, d\} such that every integer between 1 and d lies in exactly r of these subsets, then \Eta[(X_1, \ldots ,X_d)]\leq \frac{1}{r}\sum_{i=1}^{n}\Eta[(X_j)_{j\in S_i}], where (X_j)_{j\in S_i} is the Cartesian product of random variables X_j with indexes j in S_i (so the dimension of this vector is equal to the size of S_i). We sketch how Loomis–Whitney follows from this: Indeed, let X = (X_1, \ldots, X_d) be a uniformly distributed random variable with values in A, so that each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) \Eta(X) = \log|A|, where |A| denotes the cardinality of A. Let S_i = \{1, 2, \ldots, i-1, i+1, \ldots, d\}. The range of (X_j)_{j\in S_i} is contained in P_i(A) and hence \Eta[(X_j)_{j\in S_i}]\leq \log |P_i(A)|. Now use this to bound the right side of Shearer's inequality and exponentiate the opposite sides of the resulting inequality you obtain.
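
The inequality itself is easy to check numerically for a small point set; the Python sketch below is illustrative, with d = 3 and an arbitrary set of lattice points:

    # An arbitrary finite subset A of Z^3.
    A = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1), (2, 1, 0)}
    d = 3

    # P_i(A): drop the i-th coordinate of every point of A.
    projections = [
        {tuple(x for j, x in enumerate(pt) if j != i) for pt in A}
        for i in range(d)
    ]

    lhs = len(A) ** (d - 1)
    rhs = 1
    for proj in projections:
        rhs *= len(proj)
    print(lhs, rhs, lhs <= rhs)   # Loomis-Whitney: |A|^(d-1) <= |P_1(A)| |P_2(A)| |P_3(A)|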


Approximation to binomial coefficient

For integers 0 < k < n let q = k/n. Then \frac{2^{n\Eta(q)}}{n+1} \leq \tbinom{n}{k} \leq 2^{n\Eta(q)}, where \Eta(q) = -q \log_2(q) - (1-q) \log_2(1-q). A nice interpretation of this is that the number of binary strings of length n with exactly k many 1's is approximately 2^{n\Eta(k/n)}.
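
A quick numerical check of these bounds, as an illustrative Python sketch (n = 100 and k = 25 are arbitrary):

    import math

    def H2(q):
        """Binary entropy in bits."""
        return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

    n, k = 100, 25
    q = k / n
    binom = math.comb(n, k)
    upper = 2 ** (n * H2(q))
    lower = upper / (n + 1)
    print(lower <= binom <= upper)        # True
    print(math.log2(binom), n * H2(q))    # ~77.7 versus ~81.1: the exponents agree to leading order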


Use in machine learning

Machine learning techniques arise largely from statistics and also information theory. In general, entropy is a measure of uncertainty and the objective of machine learning is to minimize uncertainty. Decision tree learning algorithms use relative entropy to determine the decision rules that govern the data at each node. The information gain in decision trees IG(Y,X), which is equal to the difference between the entropy of Y and the conditional entropy of Y given X, quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attribute X. The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally. Bayesian inference models often apply the principle of maximum entropy to obtain prior probability distributions. The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior. Classification in machine learning performed by logistic regression or artificial neural networks often employs a standard loss function, called cross-entropy loss, that minimizes the average cross entropy between ground truth and predicted distributions. In general, cross entropy is a measure of the differences between two probability distributions, similar to the KL divergence (also known as relative entropy).
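
The information gain computation can be made concrete with a small example. The Python sketch below is illustrative; the toy dataset of (attribute value, class label) pairs is invented:

    import math
    from collections import Counter, defaultdict

    def H(labels):
        """Entropy, in bits, of the empirical distribution of a list of labels."""
        counts = Counter(labels)
        total = len(labels)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # Toy dataset: attribute value x and class label y for each example.
    data = [('sunny', 'no'), ('sunny', 'no'), ('overcast', 'yes'), ('rain', 'yes'),
            ('rain', 'yes'), ('rain', 'no'), ('overcast', 'yes'), ('sunny', 'yes')]

    y_all = [y for _, y in data]
    groups = defaultdict(list)
    for x, y in data:
        groups[x].append(y)

    H_y = H(y_all)
    H_y_given_x = sum(len(g) / len(data) * H(g) for g in groups.values())
    print(round(H_y - H_y_given_x, 4))   # IG(Y, X) = H(Y) - H(Y|X), the gain from splitting on X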


See also

* Approximate entropy (ApEn)
* Entropy (thermodynamics)
* Cross entropy – a measure of the average number of bits needed to identify an event from a set of possibilities between two probability distributions
* Entropy (arrow of time)
* Entropy encoding – a coding scheme that assigns codes to symbols so as to match code lengths with the probabilities of the symbols
* Entropy estimation
* Entropy power inequality
* Fisher information
* Graph entropy
* Hamming distance
* History of entropy
* History of information theory
* Information fluctuation complexity
* Information geometry
* Kolmogorov–Sinai entropy in dynamical systems
* Levenshtein distance
* Mutual information
* Perplexity
* Qualitative variation – other measures of statistical dispersion for nominal distributions
* Quantum relative entropy – a measure of distinguishability between two quantum states
* Rényi entropy – a generalization of Shannon entropy; it is one of a family of functionals for quantifying the diversity, uncertainty or randomness of a system
* Randomness
* Sample entropy (SampEn)
* Shannon index
* Theil index
* Typoglycemia


Notes


References


Further reading


Textbooks on information theory

* Cover, T.M., Thomas, J.A. (2006), ''Elements of Information Theory – 2nd Ed.'', Wiley-Interscience.
* MacKay, D.J.C. (2003), ''Information Theory, Inference and Learning Algorithms'', Cambridge University Press.
* Arndt, C. (2004), ''Information Measures: Information and its Description in Science and Engineering'', Springer.
* Gray, R. M. (2011), ''Entropy and Information Theory'', Springer.
* Shannon, C.E., Weaver, W. (1949), ''The Mathematical Theory of Communication'', Univ of Illinois Press.
* Stone, J. V. (2014), Chapter 1 of ''Information Theory: A Tutorial Introduction'', University of Sheffield, England.


External links

* "Entropy" at Rosetta Code – repository of implementations of Shannon entropy in different programming languages.
* ''Entropy'', an interdisciplinary journal on all aspects of the entropy concept. Open access.