Rényi entropy

In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.

The Rényi entropy is important in ecology and statistics as an index of diversity. It is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of \alpha can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group. In theoretical computer science, the min-entropy is used in the context of randomness extractors.


Definition

The Rényi entropy of order \alpha, where 0 < \alpha < \infty and \alpha \neq 1, is defined as
\Eta_\alpha(X) = \frac{1}{1-\alpha} \log\left(\sum_{i=1}^n p_i^\alpha\right) .
It is further defined at \alpha = 0, 1, \infty as
\Eta_\alpha(X) = \lim_{\gamma \to \alpha} \Eta_\gamma(X) .
Here, X is a discrete random variable with possible outcomes in the set \mathcal{A} = \{x_1, x_2, \dots, x_n\} and corresponding probabilities p_i \doteq \Pr(X = x_i) for i = 1, \dots, n. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base ''e''. If the probabilities are p_i = 1/n for all i, then all the Rényi entropies of the distribution are equal: \Eta_\alpha(X) = \log n. In general, for all discrete random variables X, \Eta_\alpha(X) is a non-increasing function in \alpha.

Applications often exploit the following relation between the Rényi entropy and the ''α''-norm of the vector of probabilities:
\Eta_\alpha(X) = \frac{\alpha}{1-\alpha} \log \left(\|P\|_\alpha\right) .
Here, the discrete probability distribution P = (p_1, \dots, p_n) is interpreted as a vector in \R^n with p_i \geq 0 and \sum_{i=1}^n p_i = 1. The Rényi entropy for any \alpha \geq 0 is Schur concave, which can be proven by the Schur–Ostrowski criterion.
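As a minimal numerical sketch of this definition (natural logarithms, so the values are in nats; the helper name renyi_entropy and the example distribution are illustrative, not from the original text):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy (in nats) of a discrete distribution p for order alpha >= 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # terms with p_i = 0 do not contribute
    if alpha == 1:                    # Shannon entropy as the alpha -> 1 limit
        return -np.sum(p * np.log(p))
    if np.isinf(alpha):               # min-entropy as the alpha -> infinity limit
        return -np.log(np.max(p))
    return np.log(np.sum(p ** alpha)) / (1 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
for a in [0, 0.5, 1, 2, np.inf]:
    print(a, renyi_entropy(p, a))     # non-increasing in alpha
```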


Special cases

As \alpha approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit \alpha \to 0, the Rényi entropy is just the logarithm of the size of the support of X. The limit \alpha \to 1 is the Shannon entropy. As \alpha approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.


Hartley or max-entropy

\Eta_0(X) is \log n, where n is the number of non-zero probabilities. If the probabilities are all nonzero, it is simply the logarithm of the cardinality of the alphabet (\mathcal{A}) of X, sometimes called the Hartley entropy of X:
\Eta_0 (X) = \log n = \log |\mathcal{A}| \,.


Shannon entropy

The limiting value of \Eta_\alpha as \alpha \to 1 is the Shannon entropy:
\Eta_1 (X) \equiv \lim_{\alpha \to 1} \Eta_\alpha (X) = - \sum_{i=1}^n p_i \log p_i \,.
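For completeness, a short derivation of this limit (standard, though not spelled out in the original text): since \sum_{i=1}^n p_i^\alpha \to 1 as \alpha \to 1, both the numerator and the denominator of the defining expression tend to zero, and L'Hôpital's rule gives
\lim_{\alpha \to 1} \Eta_\alpha(X) = \lim_{\alpha \to 1} \frac{\log\left(\sum_{i=1}^n p_i^\alpha\right)}{1-\alpha} = \lim_{\alpha \to 1} \frac{\sum_{i=1}^n p_i^\alpha \log p_i \,\big/\, \sum_{i=1}^n p_i^\alpha}{-1} = - \sum_{i=1}^n p_i \log p_i \,.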


Collision entropy

Collision entropy, sometimes just called "Rényi entropy", refers to the case \alpha = 2:
\Eta_2 (X) = - \log \sum_{i=1}^n p_i^2 = - \log P(X = Y) \,,
where X and Y are independent and identically distributed. The collision entropy is related to the index of coincidence. It is the negative logarithm of the Simpson diversity index.
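A small simulation of the identity \Eta_2(X) = -\log P(X = Y): draw many independent pairs and compare the negative log of the empirical collision frequency with the closed form (a sketch; the distribution and sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])

# Closed form: H_2(X) = -log(sum_i p_i^2)
h2_exact = -np.log(np.sum(p ** 2))

# Monte Carlo: probability that two independent draws collide
n = 200_000
x = rng.choice(len(p), size=n, p=p)
y = rng.choice(len(p), size=n, p=p)
h2_mc = -np.log(np.mean(x == y))

print(h2_exact, h2_mc)  # the two values should agree to about two decimal places
```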


Min-entropy

In the limit as \alpha \to \infty, the Rényi entropy \Eta_\alpha converges to the min-entropy \Eta_\infty:
\Eta_\infty(X) \doteq \min_i (-\log p_i) = -(\max_i \log p_i) = -\log \max_i p_i \,.
Equivalently, the min-entropy \Eta_\infty(X) is the largest real number b such that all events occur with probability at most 2^{-b} (taking the logarithm to base 2).

The name ''min-entropy'' stems from the fact that it is the smallest entropy measure in the family of Rényi entropies. In this sense, it is the strongest way to measure the information content of a discrete random variable. In particular, the min-entropy is never larger than the Shannon entropy.

The min-entropy has important applications for randomness extractors in theoretical computer science: extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task.
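The following sketch contrasts Shannon entropy and min-entropy for a weak randomness source (the particular distribution is an illustrative choice, not from the original text): a source that emits one fixed value half the time and is otherwise uniform over many values has high Shannon entropy but only one bit of min-entropy, which is all an extractor can rely on.

```python
import numpy as np

def shannon(p):
    p = np.asarray(p, float); p = p[p > 0]
    return -np.sum(p * np.log2(p))        # bits

def min_entropy(p):
    return -np.log2(np.max(p))            # bits

n = 2**20
# Half the mass on a single outcome, the rest spread uniformly over n outcomes.
p = np.concatenate(([0.5], np.full(n, 0.5 / n)))

print(shannon(p))      # 11 bits
print(min_entropy(p))  # exactly 1 bit
```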


Inequalities for different orders ''α''

The fact that \Eta_\alpha is non-increasing in \alpha for any given distribution of probabilities p_1, \dots, p_n can be proven by differentiation, as
-\frac{d\Eta_\alpha}{d\alpha} = \frac{1}{(1-\alpha)^2} \sum_{i=1}^n z_i \log(z_i / p_i) = \frac{1}{(1-\alpha)^2} D_{\mathrm{KL}}(z \Vert p) \,,
which is proportional to the Kullback–Leibler divergence (which is always non-negative), where z_i = p_i^\alpha / \sum_{j=1}^n p_j^\alpha. In particular, it is strictly positive except when the distribution is uniform. At the \alpha \to 1 limit, we have -\frac{d\Eta_\alpha}{d\alpha} \to \frac{1}{2} \sum_i p_i \left(\log p_i + \Eta_1(X)\right)^2.

In particular cases inequalities can be proven also by Jensen's inequality:
\log n = \Eta_0 \geq \Eta_1 \geq \Eta_2 \geq \Eta_\infty \,.
For values of \alpha > 1, inequalities in the other direction also hold. In particular, we have
\Eta_2 \le 2 \Eta_\infty \,.
On the other hand, the Shannon entropy \Eta_1 can be arbitrarily high for a random variable X that has a given min-entropy. An example of this is given by the sequence of random variables X_n \sim \{0, 1, \dots, n\} for n \geq 1 such that P(X_n = 0) = 1/2 and P(X_n = x) = 1/(2n) for x = 1, \dots, n, since \Eta_\infty(X_n) = \log 2 but \Eta_1(X_n) = \tfrac{1}{2}\left(\log 2 + \log 2n\right), which grows without bound with n.
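A quick numerical check of these relations on a random probability vector (a sketch; the helper renyi mirrors the definition above and the tolerances are arbitrary):

```python
import numpy as np

def renyi(p, alpha):
    p = np.asarray(p, float); p = p[p > 0]
    if alpha == 1:
        return -np.sum(p * np.log(p))
    if np.isinf(alpha):
        return -np.log(np.max(p))
    return np.log(np.sum(p ** alpha)) / (1 - alpha)

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(6))            # a random probability vector

alphas = [0, 0.5, 1, 2, 5, np.inf]
values = [renyi(p, a) for a in alphas]
assert all(values[i] >= values[i + 1] - 1e-12 for i in range(len(values) - 1))  # non-increasing in alpha
assert renyi(p, 2) <= 2 * renyi(p, np.inf) + 1e-12                              # H_2 <= 2 H_inf
print(dict(zip(map(str, alphas), values)))
```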


Rényi divergence

As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.

The Rényi divergence of order \alpha (or alpha-divergence) of a distribution P from a distribution Q is defined to be
D_\alpha (P \Vert Q) = \frac{1}{\alpha-1} \log\left(\sum_{i=1}^n \frac{p_i^\alpha}{q_i^{\alpha-1}}\right) = \frac{1}{\alpha-1} \log \mathbb{E}_{i \sim P}\left[\left(\frac{p_i}{q_i}\right)^{\alpha-1}\right]
when 0 < \alpha < \infty and \alpha \neq 1. We can define the Rényi divergence for the special values \alpha = 0, 1, \infty by taking a limit, and in particular the limit \alpha \to 1 gives the Kullback–Leibler divergence.

Some special cases:
* \alpha = 0: minus the log probability under Q that p_i > 0;
* \alpha = 1/2: minus twice the logarithm of the Bhattacharyya coefficient;
* \alpha = 1: the Kullback–Leibler divergence;
* \alpha = 2: the log of the expected ratio of the probabilities;
* \alpha = \infty: the log of the maximum ratio of the probabilities.

The Rényi divergence is indeed a divergence, meaning simply that D_\alpha (P \Vert Q) is greater than or equal to zero, and zero only when P = Q. For any fixed distributions P and Q, the Rényi divergence is nondecreasing as a function of its order \alpha, and it is continuous on the set of \alpha for which it is finite. The quantity D_\alpha (P \Vert Q) is also referred to, for the sake of brevity, as the information of order \alpha obtained if the distribution P is replaced by the distribution Q.
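A small sketch of the divergence and its limiting cases (helper names are illustrative; natural logarithms are assumed, and p and q are taken to have full support):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) in nats for discrete p, q on the same support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if alpha == 0:
        return -np.log(np.sum(q[p > 0]))           # -log Q(p_i > 0)
    if alpha == 1:
        m = p > 0
        return np.sum(p[m] * np.log(p[m] / q[m]))  # Kullback-Leibler divergence
    if np.isinf(alpha):
        return np.log(np.max(p / q))               # log of the maximum probability ratio
    return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

p = [0.6, 0.3, 0.1]
q = [0.2, 0.5, 0.3]
for a in [0, 0.5, 1, 2, np.inf]:
    print(a, renyi_divergence(p, q, a))            # nondecreasing in alpha
```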


Financial interpretation

A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines the official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. The expected profit rate is connected to the Rényi divergence as follows:
\text{ExpectedRate} = \frac{1}{R}\, D_1(b \Vert m) + \frac{R-1}{R}\, D_{1/R}(b \Vert m) \,,
where m is the distribution defining the official odds (i.e. the "market") for the game, b is the investor-believed distribution and R is the investor's risk aversion (the Arrow–Pratt relative risk aversion).

If the true distribution is p (not necessarily coinciding with the investor's belief b), the long-term realized rate converges to the true expectation, which has a similar mathematical structure:
\text{RealizedRate} = \frac{1}{R}\, \Big( D_1(p \Vert m) - D_1(p \Vert b) \Big) + \frac{R-1}{R}\, D_{1/R}(b \Vert m) \,.
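A numerical sketch verifying the expected-rate formula above, under the assumption (not stated in the text) that an investor with relative risk aversion R allocates the fraction w_i \propto b_i^{1/R} m_i^{1-1/R} of wealth to outcome i and is paid 1/m_i per unit bet:

```python
import numpy as np

def renyi_div(p, q, alpha):
    p, q = np.asarray(p, float), np.asarray(q, float)
    if alpha == 1:
        return np.sum(p * np.log(p / q))
    return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

m = np.array([0.5, 0.3, 0.2])        # market distribution (official odds)
b = np.array([0.4, 0.4, 0.2])        # investor's believed distribution
R = 2.0                              # Arrow-Pratt relative risk aversion

# Assumed CRRA-optimal allocation: w_i proportional to b_i^(1/R) * m_i^(1 - 1/R).
w = b**(1 / R) * m**(1 - 1 / R)
w /= w.sum()

expected_rate = np.sum(b * np.log(w / m))                  # expected log-growth under belief b
formula = renyi_div(b, m, 1) / R + (R - 1) / R * renyi_div(b, m, 1 / R)
print(expected_rate, formula)                              # the two values should agree
```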


Properties specific to ''α'' = 1

The value \alpha = 1, which gives the Shannon entropy and the Kullback–Leibler divergence, is the only value at which the chain rule of conditional probability holds exactly:
\Eta(A,X) = \Eta(A) + \mathbb{E}_{a \sim A}\big[\Eta(X \mid A=a)\big]
for the absolute entropies, and
D_\mathrm{KL}\big(p(x \mid a)p(a) \,\Vert\, m(x,a)\big) = D_\mathrm{KL}\big(p(a) \,\Vert\, m(a)\big) + \mathbb{E}_{p(a)}\big[D_\mathrm{KL}\big(p(x \mid a) \,\Vert\, m(x \mid a)\big)\big]
for the relative entropies.

The latter in particular means that if we seek a distribution p(x, a) which minimizes the divergence from some underlying prior measure m(x, a), and we acquire new information which only affects the distribution of a, then the distribution of x \mid a remains m(x \mid a), unchanged.

The other Rényi divergences satisfy the criteria of being positive and continuous, being invariant under 1-to-1 co-ordinate transformations, and of combining additively when A and X are independent, so that if p(A, X) = p(A)p(X), then
\Eta_\alpha(A,X) = \Eta_\alpha(A) + \Eta_\alpha(X)
and
D_\alpha(P(A)P(X) \,\Vert\, Q(A)Q(X)) = D_\alpha(P(A) \,\Vert\, Q(A)) + D_\alpha(P(X) \,\Vert\, Q(X)) \,.
The stronger properties of the \alpha = 1 quantities allow the definition of conditional information and mutual information from communication theory.
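A numerical illustration of these two points (the joint distribution and helper names are illustrative): Rényi entropy of any order is additive over independent variables, while the conditional chain rule \Eta(A,X) = \Eta(A) + \mathbb{E}_{a \sim A}[\Eta(X \mid A=a)] generally holds only at \alpha = 1.

```python
import numpy as np

def renyi(p, alpha):
    p = np.asarray(p, float).ravel(); p = p[p > 0]
    if alpha == 1:
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1 - alpha)

# A joint distribution p(a, x) that is NOT a product of its marginals.
joint = np.array([[0.40, 0.10],
                  [0.05, 0.45]])
p_a = joint.sum(axis=1)

for alpha in (1.0, 2.0):
    chain = renyi(p_a, alpha) + sum(
        p_a[a] * renyi(joint[a] / p_a[a], alpha) for a in range(2))
    print(alpha, renyi(joint, alpha), chain)   # equal for alpha = 1, not for alpha = 2

# Additivity for independent variables holds for every alpha:
pa, px = np.array([0.7, 0.3]), np.array([0.2, 0.5, 0.3])
prod = np.outer(pa, px)
print(renyi(prod, 2), renyi(pa, 2) + renyi(px, 2))  # equal
```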


Exponential families

The Rényi entropies and divergences for an exponential family admit simple expressions. Writing the family as p_F(x;\theta) = \exp\big(\langle t(x), \theta\rangle - F(\theta) + k(x)\big), with log-normalizer F and carrier term k, one has
\Eta_\alpha(p_F(x;\theta)) = \frac{1}{1-\alpha} \left(F(\alpha\theta) - \alpha F(\theta) + \log E_p\left[e^{(\alpha-1)k(x)}\right]\right)
and
D_\alpha(p:q) = \frac{J_{F,\alpha}(\theta:\theta')}{1-\alpha} \,,
where
J_{F,\alpha}(\theta:\theta') = \alpha F(\theta) + (1-\alpha) F(\theta') - F(\alpha\theta + (1-\alpha)\theta')
is a Jensen difference divergence.
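A quick check of the divergence formula for the Bernoulli family, taken here as an illustrative example (natural parameter \theta = \log(\mu/(1-\mu)), log-normalizer F(\theta) = \log(1+e^\theta), carrier term k(x) = 0), against the direct sum \frac{1}{\alpha-1}\log\sum_x p(x)^\alpha q(x)^{1-\alpha}:

```python
import numpy as np

def F(theta):                      # Bernoulli log-normalizer
    return np.log1p(np.exp(theta))

def d_alpha_expfam(theta, theta2, alpha):
    J = alpha * F(theta) + (1 - alpha) * F(theta2) - F(alpha * theta + (1 - alpha) * theta2)
    return J / (1 - alpha)

def d_alpha_direct(mu, mu2, alpha):
    p = np.array([1 - mu, mu]); q = np.array([1 - mu2, mu2])
    return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

mu, mu2, alpha = 0.3, 0.7, 0.6
theta, theta2 = np.log(mu / (1 - mu)), np.log(mu2 / (1 - mu2))
print(d_alpha_expfam(theta, theta2, alpha), d_alpha_direct(mu, mu2, alpha))  # should match
```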


Physical meaning

The Rényi entropy in quantum physics is not considered to be an observable, due to its nonlinear dependence on the density matrix. (This nonlinear dependence applies even in the special case of the Shannon entropy.) It can, however, be given an operational meaning through the two-time measurements (also known as full counting statistics) of energy transfers. The limit of the quantum mechanical Rényi entropy as \alpha \to 1 is the von Neumann entropy.
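For a density matrix \rho, the quantum Rényi entropy is conventionally taken to be S_\alpha(\rho) = \frac{1}{1-\alpha}\log \operatorname{Tr}\rho^\alpha (a standard definition, though not written out above); the sketch below computes it from the eigenvalues and shows the \alpha \to 1 values approaching the von Neumann entropy:

```python
import numpy as np

def quantum_renyi(rho, alpha):
    """S_alpha(rho) = log(Tr rho^alpha) / (1 - alpha), computed from the eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    if alpha == 1:                                   # von Neumann entropy -Tr(rho log rho)
        return -np.sum(ev * np.log(ev))
    return np.log(np.sum(ev ** alpha)) / (1 - alpha)

# A 2x2 mixed state (illustrative choice).
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])

for a in [0.9, 0.99, 0.999, 1.0]:
    print(a, quantum_renyi(rho, a))                  # values approach the von Neumann entropy
```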


See also

* Diversity indices
* Tsallis entropy
* Generalized entropy index

