In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable ''X'' carries about an unknown parameter ''θ'' of a distribution that models ''X''. Formally, it is the variance of the score, or the expected value of the observed information.

In Bayesian statistics, the asymptotic distribution of the posterior mode depends on the Fisher information and not on the prior (according to the Bernstein–von Mises theorem, which was anticipated by Laplace for exponential families). The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician Ronald Fisher (following some initial results by Francis Ysidro Edgeworth). The Fisher information is also used in the calculation of the Jeffreys prior, which is used in Bayesian statistics.

The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates. It can also be used in the formulation of test statistics, such as the Wald test. Statistical systems of a scientific nature (physical, biological, etc.) whose likelihood functions obey shift invariance have been shown to obey maximum Fisher information. The level of the maximum depends upon the nature of the system constraints.


Definition

The Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter \theta upon which the probability of X depends. Let f(X;\theta) be the probability density function (or probability mass function) for X conditioned on the value of \theta. It describes the probability that we observe a given outcome of X, ''given'' a known value of \theta. If f is sharply peaked with respect to changes in \theta, it is easy to indicate the "correct" value of \theta from the data, or equivalently, the data X provide a lot of information about the parameter \theta. If f is flat and spread out, then it would take many samples of X to estimate the actual "true" value of \theta that ''would'' be obtained using the entire population being sampled. This suggests studying some kind of variance with respect to \theta.

Formally, the partial derivative with respect to \theta of the natural logarithm of the likelihood function is called the ''score''. Under certain regularity conditions, if \theta is the true parameter (i.e. X is actually distributed as f(X;\theta)), it can be shown that the expected value (the first moment) of the score, evaluated at the true parameter value \theta, is 0:

:\begin{align} \operatorname{E}\left[\left.\frac{\partial}{\partial\theta} \log f(X;\theta)\,\right|\,\theta\right] &= \int_{\mathbb{R}} \frac{\frac{\partial}{\partial\theta} f(x;\theta)}{f(x;\theta)} f(x;\theta)\,dx \\ &= \frac{\partial}{\partial\theta} \int_{\mathbb{R}} f(x;\theta)\,dx \\ &= \frac{\partial}{\partial\theta} 1 \\ &= 0. \end{align}

The Fisher information is defined to be the variance of the score:

: \mathcal{I}(\theta) = \operatorname{E}\left[\left.\left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2\,\right|\,\theta\right] = \int_{\mathbb{R}} \left(\frac{\partial}{\partial\theta} \log f(x;\theta)\right)^2 f(x;\theta)\,dx.

Note that 0 \leq \mathcal{I}(\theta). A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable ''X'' has been averaged out.

If \log f(X;\theta) is twice differentiable with respect to ''θ'', and under certain regularity conditions, the Fisher information may also be written as

: \mathcal{I}(\theta) = -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\,\right|\,\theta\right],

since

:\frac{\partial^2}{\partial\theta^2} \log f(X;\theta) = \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X;\theta)} - \left(\frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X;\theta)}\right)^2 = \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X;\theta)} - \left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2

and

: \operatorname{E}\left[\left.\frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X;\theta)}\,\right|\,\theta\right] = \frac{\partial^2}{\partial\theta^2} \int_{\mathbb{R}} f(x;\theta)\,dx = 0.

Thus, the Fisher information may be seen as the curvature of the support curve (the graph of the log-likelihood). Near the maximum likelihood estimate, low Fisher information therefore indicates that the maximum appears "blunt", that is, the maximum is shallow and there are many nearby values with a similar log-likelihood. Conversely, high Fisher information indicates that the maximum is sharp.
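As a quick numerical illustration of the two equivalent expressions above, the following Python sketch (not part of the original article; all names are illustrative) estimates the Fisher information of a normal distribution with known variance about its mean, where the exact value is 1/σ². The variance-of-score form and the negative-curvature form should agree.

```python
import numpy as np

# Minimal sketch: estimate the Fisher information of N(mu, sigma^2) about mu.
# The exact answer is I(mu) = 1 / sigma**2.
rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0
x = rng.normal(mu, sigma, size=200_000)

# Score of one observation: d/dmu log f(x; mu) = (x - mu) / sigma**2
score = (x - mu) / sigma**2
fisher_as_var_of_score = np.var(score)

# Second-derivative form: -E[d^2/dmu^2 log f(x; mu)], which is constant here.
second_deriv = np.full_like(x, -1.0 / sigma**2)
fisher_as_neg_curvature = -np.mean(second_deriv)

print(fisher_as_var_of_score, fisher_as_neg_curvature, 1 / sigma**2)
```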


Regularity conditions

The regularity conditions are as follows:
# The partial derivative of ''f''(''X''; ''θ'') with respect to ''θ'' exists almost everywhere. (It can fail to exist on a null set, as long as this set does not depend on ''θ''.)
# The integral of ''f''(''X''; ''θ'') can be differentiated under the integral sign with respect to ''θ''.
# The support of ''f''(''X''; ''θ'') does not depend on ''θ''.
If ''θ'' is a vector, then the regularity conditions must hold for every component of ''θ''. It is easy to find an example of a density that does not satisfy the regularity conditions: the density of a Uniform(0, ''θ'') variable fails to satisfy conditions 1 and 3. In this case, even though the Fisher information can be computed from the definition, it will not have the properties it is typically assumed to have.


In terms of likelihood

Because the likelihood of ''θ'' given ''X'' is always proportional to the probability ''f''(''X''; ''θ''), their logarithms necessarily differ by a constant that is independent of ''θ'', and the derivatives of these logarithms with respect to ''θ'' are necessarily equal. Thus one can substitute the log-likelihood ''l''(''θ''; ''X'') for log ''f''(''X''; ''θ'') in the definitions of Fisher information.


Samples of any size

The value ''X'' can represent a single sample drawn from a single distribution or can represent a collection of samples drawn from a collection of distributions. If there are ''n'' samples and the corresponding ''n'' distributions are statistically independent, then the Fisher information will necessarily be the sum of the single-sample Fisher information values, one for each single sample from its distribution. In particular, if the ''n'' distributions are independent and identically distributed, then the Fisher information will necessarily be ''n'' times the Fisher information of a single sample from the common distribution.


Informal derivation of the Cramér–Rao bound

The Cramér–Rao bound states that the inverse of the Fisher information is a lower bound on the variance of any unbiased estimator of ''θ''. H. L. Van Trees (1968) and B. Roy Frieden (2004) provide the following method of deriving the Cramér–Rao bound, a result which describes use of the Fisher information.

Informally, we begin by considering an unbiased estimator \hat\theta(X). Mathematically, "unbiased" means that

: \operatorname{E}\left[\left.\hat\theta(X) - \theta\,\right|\,\theta\right] = \int \left(\hat\theta(x) - \theta\right) f(x;\theta)\,dx = 0 \text{ for all } \theta.

This expression is zero independent of ''θ'', so its partial derivative with respect to ''θ'' must also be zero. By the product rule, this partial derivative is also equal to

: 0 = \frac{\partial}{\partial\theta} \int \left(\hat\theta(x) - \theta\right) f(x;\theta)\,dx = \int \left(\hat\theta(x) - \theta\right) \frac{\partial f}{\partial\theta}\,dx - \int f\,dx.

For each ''θ'', the likelihood function is a probability density function, and therefore \int f\,dx = 1. By using the chain rule on the partial derivative of \log f and then dividing and multiplying by f(x;\theta), one can verify that

:\frac{\partial f}{\partial\theta} = f\,\frac{\partial \log f}{\partial\theta}.

Using these two facts in the above, we get

: \int \left(\hat\theta - \theta\right) f\,\frac{\partial \log f}{\partial\theta}\,dx = 1.

Factoring the integrand gives

: \int \left(\left(\hat\theta - \theta\right)\sqrt{f}\right)\left(\sqrt{f}\,\frac{\partial \log f}{\partial\theta}\right)dx = 1.

Squaring the expression in the integral, the Cauchy–Schwarz inequality yields

: 1 = \biggl(\int \left[\left(\hat\theta - \theta\right)\sqrt{f}\right]\cdot\left[\sqrt{f}\,\frac{\partial \log f}{\partial\theta}\right]dx\biggr)^2 \le \left[\int \left(\hat\theta - \theta\right)^2 f\,dx\right]\cdot\left[\int \left(\frac{\partial \log f}{\partial\theta}\right)^2 f\,dx\right].

The second bracketed factor is defined to be the Fisher information, while the first bracketed factor is the expected mean-squared error of the estimator \hat\theta. By rearranging, the inequality tells us that

: \operatorname{Var}\left(\hat\theta\right) \geq \frac{1}{\mathcal{I}(\theta)}.

In other words, the precision to which we can estimate ''θ'' is fundamentally limited by the Fisher information of the likelihood function.
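A minimal simulation sketch (illustrative Python, not from the original text) can make the bound concrete: for i.i.d. samples from N(μ, σ²), the sample mean is unbiased for μ and its variance attains the bound 1/(n·I(μ)) = σ²/n.

```python
import numpy as np

# Sketch: check the Cramér–Rao bound for estimating the mean of N(mu, sigma^2)
# from n i.i.d. samples with the sample mean (an unbiased estimator).
# Here the bound is attained: Var(sample mean) = sigma^2 / n = 1 / (n * I(mu)).
rng = np.random.default_rng(1)
mu, sigma, n, trials = 0.7, 1.3, 50, 100_000

estimates = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
empirical_var = estimates.var()
crb = sigma**2 / n            # inverse of the total Fisher information n / sigma^2

print(empirical_var, crb)     # the two numbers should be close
```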


Single-parameter Bernoulli experiment

A Bernoulli trial is a random variable with two possible outcomes, "success" and "failure", with success having a probability of ''θ''. The outcome can be thought of as determined by a coin toss, with the probability of heads being ''θ'' and the probability of tails being 1 − ''θ''. Let ''X'' be a Bernoulli trial. The Fisher information contained in ''X'' may be calculated to be

:\begin{align} \mathcal{I}(\theta) &= -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\theta^2} \log\left(\theta^X (1-\theta)^{1-X}\right)\,\right|\,\theta\right] \\ &= -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\theta^2} \left(X\log\theta + (1-X)\log(1-\theta)\right)\,\right|\,\theta\right] \\ &= \operatorname{E}\left[\left.\frac{X}{\theta^2} + \frac{1-X}{(1-\theta)^2}\,\right|\,\theta\right] \\ &= \frac{\theta}{\theta^2} + \frac{1-\theta}{(1-\theta)^2} \\ &= \frac{1}{\theta(1-\theta)}. \end{align}

Because Fisher information is additive, the Fisher information contained in ''n'' independent Bernoulli trials is therefore

:\mathcal{I}(\theta) = \frac{n}{\theta(1-\theta)}.

This is the reciprocal of the variance of the mean number of successes in ''n'' Bernoulli trials, so in this case, the Cramér–Rao bound is an equality.
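The closed form above is easy to cross-check numerically. The following Python sketch (illustrative, not from the original text) estimates the single-trial Fisher information as the variance of the score and compares it with 1/(θ(1 − θ)).

```python
import numpy as np

# Sketch: Monte Carlo estimate of the Fisher information of one Bernoulli(theta)
# trial, compared with the closed form 1 / (theta * (1 - theta)).
rng = np.random.default_rng(2)
theta = 0.3
x = rng.binomial(1, theta, size=500_000)

# Score of one observation: d/dtheta [x*log(theta) + (1-x)*log(1-theta)]
score = x / theta - (1 - x) / (1 - theta)
print(score.var(), 1 / (theta * (1 - theta)))   # the two should be close
```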


Matrix form

When there are ''N'' parameters, so that ''θ'' is a vector \theta = \begin{bmatrix}\theta_1 & \theta_2 & \dots & \theta_N\end{bmatrix}^\textsf{T}, then the Fisher information takes the form of an ''N'' × ''N'' matrix. This matrix is called the Fisher information matrix (FIM) and has typical element

: \bigl[\mathcal{I}(\theta)\bigr]_{i,j} = \operatorname{E}\left[\left.\left(\frac{\partial}{\partial\theta_i}\log f(X;\theta)\right)\left(\frac{\partial}{\partial\theta_j}\log f(X;\theta)\right)\right|\,\theta\right].

The FIM is a positive semidefinite matrix. If it is positive definite, then it defines a Riemannian metric on the ''N''-dimensional parameter space. The topic of information geometry uses this to connect Fisher information to differential geometry, and in that context, this metric is known as the Fisher information metric.

Under certain regularity conditions, the Fisher information matrix may also be written as

: \bigl[\mathcal{I}(\theta)\bigr]_{i,j} = -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\theta_i\,\partial\theta_j}\log f(X;\theta)\right|\,\theta\right].

The result is interesting in several ways:
*It can be derived as the Hessian of the relative entropy.
*It can be used as a Riemannian metric for defining Fisher–Rao geometry when it is positive-definite.
*It can be understood as a metric induced from the Euclidean metric, after an appropriate change of variable.
*In its complex-valued form, it is the Fubini–Study metric.
*It is the key part of the proof of Wilks' theorem, which allows confidence region estimates for maximum likelihood estimation (for those conditions for which it applies) without needing the likelihood principle.
*In cases where the analytical calculation of the FIM above is difficult, it is possible to form an average of easy Monte Carlo estimates of the Hessian of the negative log-likelihood function as an estimate of the FIM (as sketched after this list). The estimates may be based on values of the negative log-likelihood function or the gradient of the negative log-likelihood function; no analytical calculation of the Hessian of the negative log-likelihood function is needed.
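The last point can be illustrated with a small Python sketch (illustrative only; the model, step size, and sample sizes are arbitrary choices). It averages finite-difference Hessians of the per-observation negative log-likelihood of N(μ, σ²) over simulated data; the exact FIM in (μ, σ) is diag(1/σ², 2/σ²).

```python
import numpy as np

# Sketch: Monte Carlo estimate of the FIM of N(mu, sigma^2) in (mu, sigma),
# by averaging finite-difference Hessians of the per-observation negative
# log-likelihood over simulated data. Exact answer: diag(1/sigma^2, 2/sigma^2).
rng = np.random.default_rng(3)
mu, sigma = 0.0, 1.5
theta0 = np.array([mu, sigma])
x = rng.normal(mu, sigma, size=2_000)

def neg_loglik(theta, xi):
    m, s = theta
    return 0.5 * np.log(2 * np.pi * s**2) + (xi - m) ** 2 / (2 * s**2)

def finite_diff_hessian(f, theta, xi, h=1e-4):
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h
            ej = np.zeros(k); ej[j] = h
            H[i, j] = (f(theta + ei + ej, xi) - f(theta + ei - ej, xi)
                       - f(theta - ei + ej, xi) + f(theta - ei - ej, xi)) / (4 * h * h)
    return H

# Average the per-observation Hessians over the simulated sample.
fim_estimate = np.mean([finite_diff_hessian(neg_loglik, theta0, xi) for xi in x], axis=0)
print(fim_estimate)                               # approximately the matrix below
print(np.diag([1 / sigma**2, 2 / sigma**2]))
```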


Orthogonal parameters

We say that two parameters ''θi'' and ''θj'' are orthogonal if the element of the ''i''th row and ''j''th column of the Fisher information matrix is zero. Orthogonal parameters are easy to deal with in the sense that their maximum likelihood estimates are independent and can be calculated separately. When dealing with research problems, it is very common for the researcher to invest some time searching for an orthogonal parametrization of the densities involved in the problem.


Singular statistical model

If the Fisher information matrix is positive definite for all ''θ'', then the corresponding statistical model is said to be ''regular''; otherwise, the statistical model is said to be ''singular''. Examples of singular statistical models include the following: normal mixtures, binomial mixtures, multinomial mixtures, Bayesian networks, neural networks, radial basis functions, hidden Markov models, stochastic context-free grammars, reduced rank regressions, and Boltzmann machines. In machine learning, if a statistical model is devised so that it extracts hidden structure from a random phenomenon, then it naturally becomes singular.


Multivariate normal distribution

The FIM for an ''N''-variate multivariate normal distribution, X \sim N\left(\mu(\theta),\,\Sigma(\theta)\right), has a special form. Let the ''K''-dimensional vector of parameters be \theta = \begin{bmatrix}\theta_1 & \dots & \theta_K\end{bmatrix}^\textsf{T} and the vector of random normal variables be X = \begin{bmatrix}X_1 & \dots & X_N\end{bmatrix}^\textsf{T}. Assume that the mean values of these random variables are \mu(\theta) = \begin{bmatrix}\mu_1(\theta) & \dots & \mu_N(\theta)\end{bmatrix}^\textsf{T}, and let \Sigma(\theta) be the covariance matrix. Then, for 1 \le m, n \le K, the (''m'', ''n'') entry of the FIM is:

: \mathcal{I}_{m,n} = \frac{\partial\mu^\textsf{T}}{\partial\theta_m}\Sigma^{-1}\frac{\partial\mu}{\partial\theta_n} + \frac{1}{2}\operatorname{tr}\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_m}\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_n}\right),

where (\cdot)^\textsf{T} denotes the transpose of a vector, \operatorname{tr}(\cdot) denotes the trace of a square matrix, and:

: \begin{align} \frac{\partial\mu}{\partial\theta_m} &= \begin{bmatrix}\dfrac{\partial\mu_1}{\partial\theta_m} & \dfrac{\partial\mu_2}{\partial\theta_m} & \cdots & \dfrac{\partial\mu_N}{\partial\theta_m}\end{bmatrix}^\textsf{T}; \\[6pt] \frac{\partial\Sigma}{\partial\theta_m} &= \begin{bmatrix}\dfrac{\partial\Sigma_{1,1}}{\partial\theta_m} & \dfrac{\partial\Sigma_{1,2}}{\partial\theta_m} & \cdots & \dfrac{\partial\Sigma_{1,N}}{\partial\theta_m} \\[6pt] \dfrac{\partial\Sigma_{2,1}}{\partial\theta_m} & \dfrac{\partial\Sigma_{2,2}}{\partial\theta_m} & \cdots & \dfrac{\partial\Sigma_{2,N}}{\partial\theta_m} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial\Sigma_{N,1}}{\partial\theta_m} & \dfrac{\partial\Sigma_{N,2}}{\partial\theta_m} & \cdots & \dfrac{\partial\Sigma_{N,N}}{\partial\theta_m}\end{bmatrix}. \end{align}

Note that a special, but very common, case is the one where \Sigma(\theta) = \Sigma, a constant. Then

: \mathcal{I}_{m,n} = \frac{\partial\mu^\textsf{T}}{\partial\theta_m}\Sigma^{-1}\frac{\partial\mu}{\partial\theta_n}.

In this case the Fisher information matrix may be identified with the coefficient matrix of the normal equations of least squares estimation theory.

Another special case occurs when the mean and covariance depend on two different vector parameters, say, ''β'' and ''θ''. This is especially popular in the analysis of spatial data, which often uses a linear model with correlated residuals. In this case,

: \mathcal{I}(\beta, \theta) = \operatorname{diag}\left(\mathcal{I}(\beta), \mathcal{I}(\theta)\right),

where

: \begin{align} \bigl[\mathcal{I}(\beta)\bigr]_{m,n} &= \frac{\partial\mu^\textsf{T}}{\partial\beta_m}\Sigma^{-1}\frac{\partial\mu}{\partial\beta_n}, \\[6pt] \bigl[\mathcal{I}(\theta)\bigr]_{m,n} &= \frac{1}{2}\operatorname{tr}\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_m}\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_n}\right). \end{align}
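The entry formula above is straightforward to evaluate once the mean and covariance derivatives are known. The following Python sketch (illustrative; the toy parameterization and derivatives are made up for the example) implements it directly.

```python
import numpy as np

# Sketch: evaluate the Gaussian FIM formula for a toy model with parameters
# theta = (a, s): mean mu = [a, 2a] and covariance Sigma = s * I_2.
def gaussian_fim(dmu, dSigma, Sigma):
    """dmu[m]: d mu / d theta_m (vector); dSigma[m]: d Sigma / d theta_m (matrix)."""
    K = len(dmu)
    Sinv = np.linalg.inv(Sigma)
    fim = np.zeros((K, K))
    for m in range(K):
        for n in range(K):
            fim[m, n] = (dmu[m] @ Sinv @ dmu[n]
                         + 0.5 * np.trace(Sinv @ dSigma[m] @ Sinv @ dSigma[n]))
    return fim

a, s = 1.0, 2.0
Sigma = s * np.eye(2)
dmu = [np.array([1.0, 2.0]), np.zeros(2)]          # d mu / da, d mu / ds
dSigma = [np.zeros((2, 2)), np.eye(2)]             # d Sigma / da, d Sigma / ds
print(gaussian_fim(dmu, dSigma, Sigma))
```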


Properties


Chain rule

Similar to the entropy or mutual information, the Fisher information also possesses a chain rule decomposition. In particular, if ''X'' and ''Y'' are jointly distributed random variables, it follows that:

:\mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_{Y\mid X}(\theta),

where \mathcal{I}_{Y\mid X}(\theta) = \operatorname{E}_X\left[\mathcal{I}_{Y\mid X=x}(\theta)\right] and \mathcal{I}_{Y\mid X=x}(\theta) is the Fisher information of ''Y'' relative to \theta calculated with respect to the conditional density of ''Y'' given a specific value ''X'' = ''x''. As a special case, if the two random variables are independent, the information yielded by the two random variables is the sum of the information from each random variable separately:

:\mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_Y(\theta).

Consequently, the information in a random sample of ''n'' independent and identically distributed observations is ''n'' times the information in a sample of size 1.


F-divergence

Given a convex function f: [0, \infty) \to (-\infty, \infty] such that f(x) is finite for all x > 0, f(1) = 0, and f(0) = \lim_{t \to 0^+} f(t) (which could be infinite), it defines an f-divergence D_f. Then, if f is strictly convex at 1, locally at \theta \in \Theta the Fisher information matrix is a metric, in the sense that

: (\delta\theta)^\textsf{T}\,\mathcal{I}(\theta)\,(\delta\theta) = \frac{2}{f''(1)}\lim_{t \to 0}\frac{D_f(P_{\theta + t\,\delta\theta}\,\|\,P_\theta)}{t^2},

where P_\theta is the distribution parametrized by \theta, that is, the distribution with density f(x;\theta). In this form, it is clear that the Fisher information matrix is a Riemannian metric, and varies correctly under a change of variables (see the section on Reparametrization).


Sufficient statistic

The information provided by a sufficient statistic is the same as that of the sample ''X''. This may be seen by using Neyman's factorization criterion for a sufficient statistic. If ''T''(''X'') is sufficient for ''θ'', then

:f(X;\theta) = g(T(X);\theta)\,h(X)

for some functions ''g'' and ''h''. The independence of ''h''(''X'') from ''θ'' implies

:\frac{\partial}{\partial\theta}\log\left[f(X;\theta)\right] = \frac{\partial}{\partial\theta}\log\left[g(T(X);\theta)\right],

and the equality of information then follows from the definition of Fisher information. More generally, if ''T'' = ''t''(''X'') is a statistic, then

: \mathcal{I}_T(\theta) \leq \mathcal{I}_X(\theta)

with equality if and only if ''T'' is a sufficient statistic.


Reparametrization

The Fisher information depends on the parametrization of the problem. If ''θ'' and ''η'' are two scalar parametrizations of an estimation problem, and ''θ'' is a continuously differentiable function of ''η'', then

:\mathcal{I}_\eta(\eta) = \mathcal{I}_\theta(\theta(\eta))\left(\frac{d\theta}{d\eta}\right)^2,

where \mathcal{I}_\eta and \mathcal{I}_\theta are the Fisher information measures of ''η'' and ''θ'', respectively.

In the vector case, suppose \boldsymbol\theta and \boldsymbol\eta are ''k''-vectors which parametrize an estimation problem, and suppose that \boldsymbol\theta is a continuously differentiable function of \boldsymbol\eta. Then

:\mathcal{I}_{\boldsymbol\eta}(\boldsymbol\eta) = \boldsymbol{J}^\textsf{T}\,\mathcal{I}_{\boldsymbol\theta}(\boldsymbol\theta(\boldsymbol\eta))\,\boldsymbol{J},

where the (''i'', ''j'')th element of the ''k'' × ''k'' Jacobian matrix \boldsymbol{J} is defined by

: J_{ij} = \frac{\partial\theta_i}{\partial\eta_j},

and where \boldsymbol{J}^\textsf{T} is the matrix transpose of \boldsymbol{J}.

In information geometry, this is seen as a change of coordinates on a Riemannian manifold, and the intrinsic properties of curvature are unchanged under different parametrizations. In general, the Fisher information matrix provides a Riemannian metric (more precisely, the Fisher–Rao metric) for the manifold of thermodynamic states, and can be used as an information-geometric complexity measure for a classification of phase transitions, e.g., the scalar curvature of the thermodynamic metric tensor diverges at (and only at) a phase transition point. In the thermodynamic context, the Fisher information matrix is directly related to the rate of change in the corresponding order parameters. In particular, such relations identify second-order phase transitions via divergences of individual elements of the Fisher information matrix.
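The scalar rule can be checked on a concrete example. The Python sketch below (illustrative only) reparametrizes a Bernoulli model from the success probability θ to the log-odds η = log(θ/(1 − θ)) and compares the transformed information with a direct Monte Carlo estimate.

```python
import numpy as np

# Sketch: the scalar reparametrization rule for a Bernoulli model, going from
# the success probability theta to the log-odds eta = log(theta / (1 - theta)).
eta = 0.8
theta = 1 / (1 + np.exp(-eta))          # theta(eta), the logistic function
dtheta_deta = theta * (1 - theta)       # derivative of theta with respect to eta

info_theta = 1 / (theta * (1 - theta))  # Fisher information in the theta parametrization
info_eta = info_theta * dtheta_deta**2  # transformed via the rule above

# Cross-check: estimate I_eta directly as the variance of the score in eta.
rng = np.random.default_rng(4)
x = rng.binomial(1, theta, size=500_000)
score_eta = x - theta                   # d/deta [x*eta - log(1 + exp(eta))]
print(info_eta, score_eta.var())        # both should be close to theta * (1 - theta)
```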


Isoperimetric inequality

The Fisher information matrix plays a role in an inequality like the isoperimetric inequality. Of all probability distributions with a given entropy, the one whose Fisher information matrix has the smallest trace is the Gaussian distribution. This is like how, of all bounded sets with a given volume, the sphere has the smallest surface area.

The proof involves taking a multivariate random variable X with density function f and adding a location parameter to form a family of densities \{f(x - \theta) \mid \theta \in \mathbb{R}^n\}. Then, by analogy with the Minkowski–Steiner formula, the "surface area" of X is defined to be

:S(X) = \lim_{\varepsilon \to 0}\frac{e^{H(X + Z_\varepsilon)} - e^{H(X)}}{\varepsilon},

where Z_\varepsilon is a Gaussian variable with covariance matrix \varepsilon I. The name "surface area" is apt because the entropy power e^{H(X)} is the volume of the "effective support set," so S(X) is the "derivative" of the volume of the effective support set, much like the Minkowski–Steiner formula. The remainder of the proof uses the entropy power inequality, which is like the Brunn–Minkowski inequality. The trace of the Fisher information matrix is found to be a factor of S(X).


Applications


Optimal design of experiments

Fisher information is widely used in optimal experimental design. Because of the reciprocity of estimator-variance and Fisher information, ''minimizing'' the ''variance'' corresponds to ''maximizing'' the ''information''.

When the linear (or linearized) statistical model has several parameters, the mean of the parameter estimator is a vector and its variance is a matrix. The inverse of the variance matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Using statistical theory, statisticians compress the information matrix using real-valued summary statistics; being real-valued functions, these "information criteria" can be maximized.

Traditionally, statisticians have evaluated estimators and designs by considering some summary statistic of the covariance matrix (of an unbiased estimator), usually with positive real values (like the determinant or matrix trace). Working with positive real numbers brings several advantages: if the estimator of a single parameter has a positive variance, then the variance and the Fisher information are both positive real numbers; hence they are members of the convex cone of nonnegative real numbers (whose nonzero members have reciprocals in this same cone).

For several parameters, the covariance matrices and information matrices are elements of the convex cone of nonnegative-definite symmetric matrices in a partially ordered vector space, under the Loewner (Löwner) order. This cone is closed under matrix addition and inversion, as well as under the multiplication of positive real numbers and matrices. An exposition of matrix theory and the Loewner order appears in Pukelsheim.

The traditional optimality criteria are the information matrix's invariants, in the sense of invariant theory; algebraically, the traditional optimality criteria are functionals of the eigenvalues of the (Fisher) information matrix (see optimal design).
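One such criterion, D-optimality, maximizes the determinant of the information matrix. The Python sketch below (illustrative; the design points are arbitrary) compares two candidate designs for a simple linear regression under unit noise variance.

```python
import numpy as np

# Sketch: compare two candidate designs for the linear model y = b0 + b1*x + noise
# by D-optimality, i.e. the determinant of the information matrix X^T X
# (unit noise variance assumed).
def d_criterion(design_points):
    X = np.column_stack([np.ones_like(design_points), design_points])
    return np.linalg.det(X.T @ X)

design_a = np.array([-1.0, -1.0, 1.0, 1.0])          # points pushed to the extremes
design_b = np.array([-0.5, 0.0, 0.0, 0.5])           # points clustered in the middle

print(d_criterion(design_a), d_criterion(design_b))  # design_a carries more information
```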


Jeffreys prior in Bayesian statistics

In Bayesian statistics, the Fisher information is used to calculate the Jeffreys prior, which is a standard, non-informative prior for continuous distribution parameters.
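For a scalar parameter, the Jeffreys prior density is proportional to the square root of the Fisher information. The Python sketch below (illustrative only) evaluates it for a Bernoulli parameter, where it reduces to a Beta(1/2, 1/2) density.

```python
import numpy as np

# Sketch: the Jeffreys prior for a Bernoulli parameter theta is proportional to
# sqrt(I(theta)) = 1 / sqrt(theta * (1 - theta)), i.e. a Beta(1/2, 1/2) density.
theta = np.linspace(0.01, 0.99, 99)
unnormalized = np.sqrt(1.0 / (theta * (1 - theta)))      # sqrt of the Fisher information
jeffreys = unnormalized / np.trapz(unnormalized, theta)  # numerical normalization
print(jeffreys[[0, 49, 98]])   # highest density near 0 and 1, lowest at theta = 0.5
```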


Computational neuroscience

The Fisher information has been used to find bounds on the accuracy of neural codes. In that case, ''X'' is typically the joint responses of many neurons representing a low dimensional variable ''θ'' (such as a stimulus parameter). In particular the role of correlations in the noise of the neural responses has been studied.


Derivation of physical laws

Fisher information plays a central role in a controversial principle put forward by Frieden as the basis of physical laws, a claim that has been disputed.


Machine learning

The Fisher information is used in machine learning techniques such as elastic weight consolidation, which reduces catastrophic forgetting in artificial neural networks. Fisher information can be used as an alternative to the Hessian of the loss function in second-order gradient-descent network training.
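A minimal sketch of the elastic weight consolidation idea (illustrative Python with made-up numbers; `lam`, `theta_A`, and the diagonal Fisher estimate are assumptions of the example, not part of the original text): when training on a new task, each parameter is pulled toward its old-task value with a strength proportional to its estimated Fisher information.

```python
import numpy as np

# Sketch of the EWC penalty: parameters that carried much Fisher information for
# the old task A are anchored strongly; uninformative parameters can move freely.
def ewc_penalty(theta, theta_A, fisher_diag, lam=1.0):
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_A) ** 2)

def total_loss(theta, task_b_loss, theta_A, fisher_diag, lam=1.0):
    # task_b_loss is the ordinary loss on the new task; the penalty discourages
    # moving parameters that were informative for the old task.
    return task_b_loss(theta) + ewc_penalty(theta, theta_A, fisher_diag, lam)

# Toy usage with made-up numbers.
theta_A = np.array([0.5, -1.2, 2.0])
fisher_diag = np.array([10.0, 0.1, 3.0])   # e.g. mean squared per-parameter score
theta = np.array([0.6, 0.0, 2.0])
print(ewc_penalty(theta, theta_A, fisher_diag))
```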


Relation to relative entropy

Fisher information is related to relative entropy. The relative entropy, or Kullback–Leibler divergence, between two distributions p and q can be written as

:KL(p : q) = \int p(x)\log\frac{p(x)}{q(x)}\,dx.

Now, consider a family of probability distributions f(x;\theta) parametrized by \theta \in \Theta. Then the Kullback–Leibler divergence between two distributions in the family can be written as

:D(\theta, \theta') = KL(p(\cdot;\theta) : p(\cdot;\theta')) = \int f(x;\theta)\log\frac{f(x;\theta)}{f(x;\theta')}\,dx.

If \theta is fixed, then the relative entropy between two distributions of the same family is minimized at \theta' = \theta. For \theta' close to \theta, one may expand the previous expression in a series up to second order:

:D(\theta, \theta') = \frac{1}{2}(\theta' - \theta)^\textsf{T}\left(\frac{\partial^2}{\partial\theta'\,\partial\theta'^\textsf{T}} D(\theta, \theta')\right)_{\theta'=\theta}(\theta' - \theta) + o\left(\|\theta' - \theta\|^2\right).

But the second-order derivative can be written as

: \left(\frac{\partial^2}{\partial\theta'\,\partial\theta'^\textsf{T}} D(\theta, \theta')\right)_{\theta'=\theta} = -\int f(x;\theta)\left(\frac{\partial^2}{\partial\theta'\,\partial\theta'^\textsf{T}}\log f(x;\theta')\right)_{\theta'=\theta} dx = \mathcal{I}(\theta).

Thus the Fisher information represents the curvature of the relative entropy of a conditional distribution with respect to its parameters.
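The second-order relationship is easy to verify numerically. The Python sketch below (illustrative, not from the original text) compares the Kullback–Leibler divergence between two nearby Bernoulli distributions with the quadratic approximation (1/2)·δ²·I(θ).

```python
import numpy as np

# Sketch: for a Bernoulli family, check that the Kullback-Leibler divergence
# between nearby parameters behaves like 0.5 * delta^2 * I(theta),
# with I(theta) = 1 / (theta * (1 - theta)).
def kl_bernoulli(p, q):
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

theta, delta = 0.3, 1e-3
kl = kl_bernoulli(theta, theta + delta)
quadratic_approx = 0.5 * delta**2 / (theta * (1 - theta))
print(kl, quadratic_approx)   # the two should agree to leading order
```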


History

The Fisher information was discussed by several early statisticians, notably F. Y. Edgeworth. For example, Savage says: "In it [Fisher information], he [Fisher] was to some extent anticipated (Edgeworth 1908–9 esp. 502, 507–8, 662, 677–8, 82–5 and references he [Edgeworth] cites including Pearson and Filon 1898 [...])." There are a number of early historical sources and a number of reviews of this early work, including Hald (1998, 1999).


See also

* Efficiency (statistics)
* Observed information
* Fisher information metric
* Formation matrix
* Information geometry
* Jeffreys prior
* Cramér–Rao bound
* Minimum Fisher information
* Quantum Fisher information

Other measures employed in information theory:
* Entropy (information theory)
* Kullback–Leibler divergence
* Self-information


Notes


References

* Frieden, B. R. (2004). ''Science from Fisher Information: A Unification''. Cambridge University Press.