Bregman Divergence

In mathematics, specifically statistics and information geometry, a Bregman divergence or Bregman distance is a measure of difference between two points, defined in terms of a strictly convex function; they form an important class of divergences. When the points are interpreted as probability distributions – notably as either values of the parameter of a parametric model or as a data set of observed values – the resulting distance is a statistical distance. The most basic Bregman divergence is the squared Euclidean distance.

Bregman divergences are similar to metrics, but satisfy neither the triangle inequality (ever) nor symmetry (in general). However, they satisfy a generalization of the Pythagorean theorem, and in information geometry the corresponding statistical manifold is interpreted as a (dually) flat manifold. This allows many techniques of optimization theory to be generalized to Bregman divergences, geometrically as generalizations of least squares.

Bregman divergences are named after Russian mathematician Lev M. Bregman, who introduced the concept in 1967.


Definition

Let F\colon \Omega \to \mathbb{R} be a continuously-differentiable, strictly convex function defined on a convex set \Omega. The Bregman distance associated with ''F'' for points p, q \in \Omega is the difference between the value of ''F'' at point ''p'' and the value of the first-order Taylor expansion of ''F'' around point ''q'' evaluated at point ''p'':

:D_F(p, q) = F(p) - F(q) - \langle \nabla F(q), p - q \rangle.
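
The definition translates directly into code. The following is a minimal sketch (not part of the original article; the helper names bregman, F, and grad_F are made up for illustration) that evaluates D_F numerically from a user-supplied generator F and its gradient, here recovering the squared Euclidean distance from F(x) = \|x\|^2.

```python
import numpy as np

def bregman(F, grad_F, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return F(p) - F(q) - np.dot(grad_F(q), p - q)

# Squared Euclidean distance from the generator F(x) = ||x||^2.
F = lambda x: np.dot(x, x)
grad_F = lambda x: 2.0 * x

p, q = np.array([1.0, 2.0]), np.array([0.0, -1.0])
print(bregman(F, grad_F, p, q))   # 10.0
print(np.sum((p - q) ** 2))       # 10.0, matching ||p - q||^2
```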


Properties

* Non-negativity: D_F(p, q) \ge 0 for all p, q. This is a consequence of the convexity of F.
* Positivity: When F is strictly convex, D_F(p, q) = 0 if and only if p = q.
* Uniqueness up to affine difference: D_F = D_G if and only if F - G is an affine function.
* Convexity: D_F(p, q) is convex in its first argument, but not necessarily in the second argument. If F is strictly convex, then D_F(p, q) is strictly convex in its first argument.
** For example, take f(x) = |x|, smooth it at 0, and take y = 1, x_1 = 0.1, x_2 = -0.9, x_3 = 0.9 x_1 + 0.1 x_2; then D_f(y, x_3) \approx 1 > 0.9 D_f(y, x_1) + 0.1 D_f(y, x_2) \approx 0.2, so D_f(y, \cdot) is not convex.
* Linearity: If we think of the Bregman distance as an operator on the function ''F'', then it is linear with respect to non-negative coefficients. In other words, for F_1, F_2 strictly convex and differentiable, and \lambda \ge 0,
::D_{F_1 + \lambda F_2}(p, q) = D_{F_1}(p, q) + \lambda D_{F_2}(p, q)
* Duality: If F is strictly convex, then the function F has a convex conjugate F^* which is also strictly convex and continuously differentiable on some convex set \Omega^*. The Bregman distance defined with respect to F^* is dual to D_F(p, q) in the sense that
::D_{F^*}(p^*, q^*) = D_F(q, p)
:Here, p^* = \nabla F(p) and q^* = \nabla F(q) are the dual points corresponding to p and q.
:Moreover, using the same notation,
::D_F(p, q) = F(p) + F^*(q^*) - \langle p, q^* \rangle
* Integral form: by the integral remainder form of Taylor's theorem, a Bregman divergence can be written as the integral of the Hessian of F along the line segment between the Bregman divergence's arguments.
* Mean as minimizer: A key result about Bregman divergences is that, given a random vector, the mean vector minimizes the expected Bregman divergence from the random vector (see the numerical sketch after this list). This result generalizes the textbook result that the mean of a set minimizes total squared error to elements in the set. It was proved for the vector case by (Banerjee et al. 2005), and extended to the case of functions/distributions by (Frigyik et al. 2008). This result is important because it further justifies using a mean as a representative of a random set, particularly in Bayesian estimation.
* Bregman balls are bounded, and compact if X is closed: Define the Bregman ball centered at x with radius r by B_f(x, r) := \left\{ y \in X : D_f(y, x) \le r \right\}. When X \subset \R^n is finite dimensional, \forall x \in X, if x is in the relative interior of X, or if X is locally closed at x (that is, there exists a closed ball B(x, r) centered at x, such that B(x, r) \cap X is closed), then B_f(x, r) is bounded for all r. If X is closed, then B_f(x, r) is compact for all r.
* Law of cosines: For any p, q, z,
::D_F(p, q) = D_F(p, z) + D_F(z, q) - (p - z)^T(\nabla F(q) - \nabla F(z))
* Parallelogram law: for any \theta, \theta_1, \theta_2,
::B_F\left(\theta_1 : \theta\right) + B_F\left(\theta_2 : \theta\right) = B_F\left(\theta_1 : \frac{\theta_1 + \theta_2}{2}\right) + B_F\left(\theta_2 : \frac{\theta_1 + \theta_2}{2}\right) + 2 B_F\left(\frac{\theta_1 + \theta_2}{2} : \theta\right)
* Bregman projection: For any W \subset \Omega, define the "Bregman projection" of q onto W: P_W(q) = \operatorname{argmin}_{\omega \in W} D_F(\omega, q). Then
** if W is convex, then the projection is unique if it exists;
** if W is nonempty, closed, and convex and \Omega \subset \R^n is finite dimensional, then the projection exists and is unique.
* Generalized Pythagorean Theorem: For any v \in \Omega and a \in W,
::D_F(a, v) \ge D_F(a, P_W(v)) + D_F(P_W(v), v).
:This is an equality if P_W(v) is in the relative interior of W. In particular, this always happens when W is an affine set.
* ''Lack'' of triangle inequality: Since the Bregman divergence is essentially a generalization of squared Euclidean distance, there is no triangle inequality. Indeed, D_F(z, x) - D_F(z, y) - D_F(y, x) = \langle \nabla F(y) - \nabla F(x), z - y \rangle, which may be positive or negative.
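
The mean-as-minimizer property can be checked numerically. The sketch below is illustrative only (the sample data, the kl_divergence helper, and the perturbed candidate are made up); it compares the expected generalized KL divergence from a random sample to its mean against the expected divergence to a nearby alternative candidate.

```python
import numpy as np

def kl_divergence(p, q):
    """Generalized KL divergence, the Bregman divergence of negative entropy."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q) - p + q)

rng = np.random.default_rng(0)
# A small random sample of points in the positive orthant (hypothetical data).
X = rng.uniform(0.1, 1.0, size=(100, 3))
mean = X.mean(axis=0)

def expected_div(s):
    return np.mean([kl_divergence(x, s) for x in X])

# The mean should beat any other candidate as the second argument.
print(expected_div(mean))
print(expected_div(mean * np.array([1.1, 0.9, 1.0])))  # strictly larger
```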


Proofs

* Non-negativity and positivity: use Jensen's inequality.
* Uniqueness up to affine difference: Fix some x \in \Omega; then for any other y \in \Omega, D_F = D_G gives by definition
::F(y) - G(y) = F(x) - G(x) + \langle \nabla F(x) - \nabla G(x), y - x \rangle.
* Convexity in the first argument: by definition, and use convexity of F. Same for strict convexity.
* Linearity in F, law of cosines, parallelogram law: by definition.
* Duality: see figure 1 of the cited reference.
* Bregman balls are bounded, and compact if X is closed: Fix x \in X. Apply an affine shift to f so that \nabla f(x) = 0. Take some \epsilon > 0 such that \partial B(x, \epsilon) \subset X. Then consider the "radial-directional" derivative of f on the Euclidean sphere \partial B(x, \epsilon), namely \langle \nabla f(y), y - x \rangle for y \in \partial B(x, \epsilon). Since \partial B(x, \epsilon) \subset \R^n is compact, it achieves its minimal value \delta at some y_0 \in \partial B(x, \epsilon). Since f is strictly convex, \delta > 0. Then B_f(x, r) \subset B(x, r/\delta) \cap X. Since D_f(y, x) is C^1 in y, D_f is continuous in y, thus B_f(x, r) is closed if X is.
* Projection P_W is well-defined when W is closed and convex: Fix v \in X. Take some w \in W and let r := D_f(w, v). Then consider the Bregman ball B_f(v, r) \cap W. It is closed and bounded, thus compact. Since D_f(\cdot, v) is continuous and strictly convex on it, and bounded below by 0, it achieves a unique minimum on it.
* Pythagorean inequality: By the law of cosines, D_f(w, v) - D_f(w, P_W(v)) - D_f(P_W(v), v) = \langle \nabla_y D_f(y, v)|_{y = P_W(v)}, w - P_W(v) \rangle, which must be \geq 0, since P_W(v) minimizes D_f(\cdot, v) in W, and W is convex.
* Pythagorean equality when P_W(v) is in the relative interior of W: If \langle \nabla_y D_f(y, v)|_{y = P_W(v)}, w - P_W(v) \rangle > 0, then since P_W(v) is in the relative interior of W, we can move from P_W(v) in the direction opposite of w to decrease D_f(y, v), a contradiction. Thus \langle \nabla_y D_f(y, v)|_{y = P_W(v)}, w - P_W(v) \rangle = 0. (A numerical check of the Bregman projection and the Pythagorean relation follows this list.)
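
As a concrete check of the Bregman projection and the generalized Pythagorean theorem, the following sketch (illustrative only, not from the article) uses the generalized KL divergence as D_F and the probability simplex as W; under these assumptions the Bregman projection of a positive vector q onto the simplex is its normalization q / \sum_i q(i), which can be derived from the KL formula with a Lagrange multiplier.

```python
import numpy as np

def gen_kl(p, q):
    """Generalized KL divergence (Bregman divergence of negative entropy)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q) - p + q)

q = np.array([0.2, 0.5, 1.3])   # a positive point outside the simplex
proj = q / q.sum()              # its Bregman (KL) projection onto the simplex

# Generalized Pythagorean theorem: equality holds here because the
# projection lies in the relative interior of the simplex (an affine slice
# of the domain).
a = np.array([0.1, 0.3, 0.6])   # any point of the simplex
lhs = gen_kl(a, q)
rhs = gen_kl(a, proj) + gen_kl(proj, q)
print(lhs, rhs)                 # the two numbers agree (up to rounding)
```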


Classification theorems

* The only symmetric Bregman divergences on X \subset \R^n are squared generalized Euclidean distances (Mahalanobis distances), that is, D_f(y, x) = (y - x)^T A (y - x) for some positive definite A.

The following two characterizations are for divergences on \Gamma_n, the set of all probability measures on \{1, 2, \ldots, n\}, with n \ge 2. Define a divergence on \Gamma_n as any function of type D: \Gamma_n \times \Gamma_n \to [0, \infty], such that D(x, x) = 0 for all x \in \Gamma_n. Then:

* The only divergence on \Gamma_n that is both a Bregman divergence and an f-divergence is the Kullback–Leibler divergence.
* If n \ge 3, then any Bregman divergence on \Gamma_n that satisfies the data processing inequality must be the Kullback–Leibler divergence. (In fact, a weaker assumption of "sufficiency" is enough.) Counterexamples exist when n = 2.

Given a Bregman divergence D_F, its "opposite", defined by D_F^*(v, w) = D_F(w, v), is generally not a Bregman divergence. For example, the Kullback–Leibler divergence is both a Bregman divergence and an f-divergence. Its reverse is also an f-divergence, but by the above characterization, the reverse KL divergence cannot be a Bregman divergence.
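
The fact that the Kullback–Leibler divergence is simultaneously a Bregman divergence (generated by negative entropy) and an f-divergence (generated by f(t) = t \log t) can be checked numerically. The sketch below is illustrative only; the helper names and sample distributions are made up.

```python
import numpy as np

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])

# KL as a Bregman divergence of F(p) = sum p_i log p_i (on the simplex the
# extra terms -sum p + sum q cancel).
F = lambda x: np.sum(x * np.log(x))
grad_F = lambda x: np.log(x) + 1.0
bregman_kl = F(p) - F(q) - np.dot(grad_F(q), p - q)

# KL as an f-divergence with f(t) = t log t: sum_i q_i f(p_i / q_i).
f = lambda t: t * np.log(t)
f_div_kl = np.sum(q * f(p / q))

print(bregman_kl, f_div_kl)  # identical values
```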


Examples

* The squared Mahalanobis distance D_F(x, y) = \tfrac{1}{2}(x - y)^T Q (x - y) is generated by the convex quadratic form F(x) = \tfrac{1}{2} x^T Q x.
* The canonical example of a Bregman distance is the squared Euclidean distance D_F(x, y) = \|x - y\|^2. It results as the special case of the above with Q = 2I, i.e. for F(x) = \|x\|^2. As noted, affine differences, i.e. the lower orders added in F, are irrelevant to D_F.
* The generalized Kullback–Leibler divergence
::D_F(p, q) = \sum_i p(i) \log \frac{p(i)}{q(i)} - \sum_i p(i) + \sum_i q(i)
:is generated by the negative entropy function
::F(p) = \sum_i p(i) \log p(i).
:When restricted to the simplex, the last two terms cancel, giving the usual Kullback–Leibler divergence for distributions.
* The Itakura–Saito distance,
::D_F(p, q) = \sum_i \left( \frac{p(i)}{q(i)} - \log \frac{p(i)}{q(i)} - 1 \right),
:is generated by the convex function (see the code sketch after this list)
::F(p) = -\sum_i \log p(i).
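
Each of these examples can be reproduced from the general definition by plugging in the generator and its gradient. The sketch below is illustrative (the bregman helper and sample vectors are made up); it compares the generic formula with the closed forms for the Itakura–Saito and squared Mahalanobis distances.

```python
import numpy as np

def bregman(F, grad_F, p, q):
    """Generic Bregman divergence from a generator F and its gradient."""
    return F(p) - F(q) - np.dot(grad_F(q), p - q)

p = np.array([0.5, 1.5, 2.0])
q = np.array([1.0, 1.0, 1.0])

# Itakura-Saito: generator F(p) = -sum log p(i).
F_is = lambda x: -np.sum(np.log(x))
grad_is = lambda x: -1.0 / x
closed_form_is = np.sum(p / q - np.log(p / q) - 1.0)
print(bregman(F_is, grad_is, p, q), closed_form_is)   # equal

# Squared Mahalanobis: generator F(x) = 0.5 * x^T Q x for a positive definite Q.
Q = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 3.0]])
F_m = lambda x: 0.5 * x @ Q @ x
grad_m = lambda x: Q @ x
closed_form_m = 0.5 * (p - q) @ Q @ (p - q)
print(bregman(F_m, grad_m, p, q), closed_form_m)      # equal
```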


Generalizing projective duality

A key tool in computational geometry is the idea of projective duality, which maps points to hyperplanes and vice versa, while preserving incidence and above-below relationships. There are numerous analytical forms of the projective dual: one common form maps the point p = (p_1, \ldots, p_d) to the hyperplane x_{d+1} = \sum_{i=1}^d 2 p_i x_i. This mapping can be interpreted (identifying the hyperplane with its normal) as the convex conjugate mapping that takes the point p to its dual point p^* = \nabla F(p), where ''F'' defines the ''d''-dimensional paraboloid x_{d+1} = \sum_{i=1}^d x_i^2.

If we now replace the paraboloid by an arbitrary convex function, we obtain a different dual mapping that retains the incidence and above-below properties of the standard projective dual. This implies that natural dual concepts in computational geometry like Voronoi diagrams and Delaunay triangulations retain their meaning in distance spaces defined by an arbitrary Bregman divergence. Thus, algorithms from "normal" geometry extend directly to these spaces (Boissonnat, Nielsen and Nock, 2010).
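
To make the conjugate mapping concrete, the following small sketch (illustrative only; the sample point is made up) computes the dual point p^* = \nabla F(p) for the paraboloid generator, recovering the hyperplane coefficients 2 p_i mentioned above, and then does the same for a different convex generator.

```python
import numpy as np

p = np.array([0.3, 1.2, 2.0])

# Paraboloid generator F(x) = sum x_i^2: the dual point is 2p, i.e. the
# coefficients of the hyperplane x_{d+1} = sum 2 p_i x_i.
grad_paraboloid = lambda x: 2.0 * x
print(grad_paraboloid(p))      # [0.6 2.4 4. ]

# Replacing the paraboloid by another convex generator, e.g. negative
# entropy F(x) = sum x_i log x_i, gives a different dual mapping.
grad_neg_entropy = lambda x: np.log(x) + 1.0
print(grad_neg_entropy(p))     # the dual point under this generator
```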


Generalization of Bregman divergences

Bregman divergences can be interpreted as limit cases of skewed Jensen divergences (see Nielsen and Boltz, 2011). Jensen divergences can be generalized using comparative convexity, and limit cases of these skewed Jensen divergence generalizations yield generalized Bregman divergences (see Nielsen and Nock, 2017). The Bregman chord divergence is obtained by taking a chord instead of a tangent line.
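
The limit-case relationship can be illustrated numerically. In the sketch below (an illustration only; the 1/\alpha scaling is one common convention and is assumed here rather than taken from the article), the scaled skewed Jensen divergence \frac{1}{\alpha}\left[\alpha F(p) + (1-\alpha) F(q) - F(\alpha p + (1-\alpha) q)\right] approaches the Bregman divergence D_F(p, q) as \alpha \to 0.

```python
import numpy as np

# Negative-entropy generator; the limit should recover the generalized KL divergence.
F = lambda x: np.sum(x * np.log(x))
grad_F = lambda x: np.log(x) + 1.0
bregman = lambda p, q: F(p) - F(q) - np.dot(grad_F(q), p - q)

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])

def scaled_skewed_jensen(p, q, alpha):
    return (alpha * F(p) + (1 - alpha) * F(q) - F(alpha * p + (1 - alpha) * q)) / alpha

for alpha in [0.5, 0.1, 0.01, 0.001]:
    print(alpha, scaled_skewed_jensen(p, q, alpha))   # tends to the Bregman value
print("Bregman limit:", bregman(p, q))
```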


Bregman divergence on other objects

Bregman divergences can also be defined between matrices, between functions, and between measures (distributions). Bregman divergences between matrices include Stein's loss and the von Neumann entropy. Bregman divergences between functions include total squared error, relative entropy, and squared bias; see the references by Frigyik et al. for definitions and properties. Similarly, Bregman divergences have also been defined over sets, through a submodular set function, which is known as the discrete analog of a convex function. The submodular Bregman divergences subsume a number of discrete distance measures, like the Hamming distance, precision and recall, mutual information, and some other set-based distance measures (see Iyer & Bilmes, 2012 for more details and properties of the submodular Bregman). For a list of common matrix Bregman divergences, see Table 15.1 in the cited reference.
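
As a hedged sketch of a matrix Bregman divergence (not from the article), the code below uses the log-determinant generator F(X) = -\log\det X on symmetric positive definite matrices, whose Bregman divergence is commonly identified with Stein's loss (the LogDet divergence); the function name and sample matrices are made up.

```python
import numpy as np

def logdet_bregman(P, Q):
    """Bregman divergence of F(X) = -log det X between SPD matrices:
    D_F(P, Q) = tr(Q^{-1} P) - log det(Q^{-1} P) - n."""
    n = P.shape[0]
    M = np.linalg.solve(Q, P)            # Q^{-1} P
    _, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - n

# Two small symmetric positive definite matrices (made-up data).
P = np.array([[2.0, 0.3], [0.3, 1.5]])
Q = np.eye(2)
print(logdet_bregman(P, Q))   # > 0
print(logdet_bregman(P, P))   # 0.0 (up to rounding)
```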


Applications

In machine learning, Bregman divergences are used to calculate the bi-tempered logistic loss, which performs better than the softmax function on noisy datasets (Ehsan Amid, Manfred K. Warmuth, Rohan Anil, Tomer Koren (2019). "Robust Bi-Tempered Logistic Loss Based on Bregman Divergences". Conference on Neural Information Processing Systems. pp. 14987-14996). The Bregman divergence is also used in the formulation of mirror descent, which includes optimization algorithms used in machine learning such as gradient descent and the hedge algorithm.
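
To connect Bregman divergences with mirror descent, here is a minimal sketch (illustrative only; the objective, step size, and helper names are made up). With the negative-entropy generator, the Bregman proximal step x_{t+1} = \operatorname{argmin}_x \langle \nabla f(x_t), x \rangle + \tfrac{1}{\eta} D_F(x, x_t) over the probability simplex reduces to the multiplicative, exponentiated-gradient (hedge-style) update used below.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, eta=0.1, steps=200):
    """Entropic mirror descent (exponentiated gradient) on the probability simplex."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))   # multiplicative update from the KL proximal step
        x /= x.sum()                     # renormalize onto the simplex
    return x

# Hypothetical objective: f(x) = 0.5 * ||x - target||^2 over the simplex.
target = np.array([0.7, 0.2, 0.1])
grad = lambda x: x - target

x_star = mirror_descent_simplex(grad, x0=np.ones(3) / 3)
print(x_star)   # converges towards the target, which already lies on the simplex
```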

