In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable ''X'' carries about an unknown parameter ''θ'' of a distribution that models ''X''. Formally, it is the variance of the score, or the expected value of the observed information.
In Bayesian statistics, the asymptotic distribution of the posterior mode depends on the Fisher information and not on the prior (according to the Bernstein–von Mises theorem, which was anticipated by Laplace for exponential families). The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician Ronald Fisher (following some initial results by Francis Ysidro Edgeworth). The Fisher information is also used in the calculation of the Jeffreys prior, which is used in Bayesian statistics.
The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates. It can also be used in the formulation of test statistics, such as the Wald test.
Statistical systems of a scientific nature (physical, biological, etc.) whose likelihood functions obey shift invariance have been shown to obey maximum Fisher information. The level of the maximum depends upon the nature of the system constraints.
Definition
The Fisher information is a way of measuring the amount of information that an observable random variable ''X'' carries about an unknown parameter ''θ'' upon which the probability of ''X'' depends. Let ''f''(''X''; ''θ'') be the probability density function (or probability mass function) for ''X'' conditioned on the value of ''θ''. It describes the probability that we observe a given outcome of ''X'', ''given'' a known value of ''θ''. If ''f'' is sharply peaked with respect to changes in ''θ'', it is easy to indicate the "correct" value of ''θ'' from the data, or equivalently, that the data ''X'' provides a lot of information about the parameter ''θ''. If ''f'' is flat and spread-out, then it would take many samples of ''X'' to estimate the actual "true" value of ''θ'' that ''would'' be obtained using the entire population being sampled. This suggests studying some kind of variance with respect to ''θ''.
Formally, the partial derivative with respect to ''θ'' of the natural logarithm of the likelihood function is called the ''score''. Under certain regularity conditions, if ''θ'' is the true parameter (i.e. ''X'' is actually distributed as ''f''(''X''; ''θ'')), it can be shown that the expected value (the first moment) of the score, evaluated at the true parameter value ''θ'', is 0:
:<math>\operatorname{E}\left[\left.\frac{\partial}{\partial\theta} \log f(X;\theta)\,\right|\,\theta\right] = \int \frac{\frac{\partial}{\partial\theta} f(x;\theta)}{f(x;\theta)}\, f(x;\theta)\,dx = \frac{\partial}{\partial\theta} \int f(x;\theta)\,dx = \frac{\partial}{\partial\theta} 1 = 0.</math>
The Fisher information is defined to be the variance of the score:
:<math>\mathcal{I}(\theta) = \operatorname{E}\left[\left.\left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^{2}\,\right|\,\theta\right] = \int \left(\frac{\partial}{\partial\theta} \log f(x;\theta)\right)^{2} f(x;\theta)\,dx.</math>
Note that <math>0 \leq \mathcal{I}(\theta)</math>. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable ''X'' has been averaged out.
If <math>\log f(X;\theta)</math> is twice differentiable with respect to ''θ'', and under certain regularity conditions, then the Fisher information may also be written as
:<math>\mathcal{I}(\theta) = -\operatorname{E}\left[\left.\frac{\partial^{2}}{\partial\theta^{2}} \log f(X;\theta)\,\right|\,\theta\right],</math>
since
:<math>\frac{\partial^{2}}{\partial\theta^{2}} \log f(X;\theta) = \frac{\frac{\partial^{2}}{\partial\theta^{2}} f(X;\theta)}{f(X;\theta)} - \left(\frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X;\theta)}\right)^{2} = \frac{\frac{\partial^{2}}{\partial\theta^{2}} f(X;\theta)}{f(X;\theta)} - \left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^{2}</math>
and
:<math>\operatorname{E}\left[\left.\frac{\frac{\partial^{2}}{\partial\theta^{2}} f(X;\theta)}{f(X;\theta)}\,\right|\,\theta\right] = \frac{\partial^{2}}{\partial\theta^{2}} \int f(x;\theta)\,dx = 0.</math>
Thus, the Fisher information may be seen as the curvature of the support curve (the graph of the log-likelihood). Near the maximum likelihood estimate, low Fisher information therefore indicates that the maximum appears "blunt", that is, the maximum is shallow and there are many nearby values with a similar log-likelihood. Conversely, high Fisher information indicates that the maximum is sharp.
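As a small numerical illustration of the two equivalent expressions above (this sketch is not part of the original text; the normal model, sample sizes, and variable names are illustrative assumptions), the Fisher information of a <math>N(\theta, \sigma^{2})</math> model with known ''σ'' can be estimated both as the variance of the score and as minus the expected second derivative; both Monte Carlo estimates should be close to the exact value <math>1/\sigma^{2}</math>.
<syntaxhighlight lang="python">
# Minimal sketch: compare the two forms of Fisher information for X ~ N(theta, sigma^2),
# where sigma is known and the exact value is I(theta) = 1/sigma^2.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n = 1.5, 2.0, 200_000

# samples drawn at the true parameter value theta
x = rng.normal(theta, sigma, size=n)

score = (x - theta) / sigma**2                 # d/dtheta log f(x; theta)
second_deriv = np.full(n, -1.0 / sigma**2)     # d^2/dtheta^2 log f(x; theta), constant here

info_from_variance = np.var(score)             # variance of the score
info_from_curvature = -np.mean(second_deriv)   # minus the expected second derivative

# both Monte Carlo estimates should be close to the exact value 1/sigma^2 = 0.25
print(info_from_variance, info_from_curvature, 1 / sigma**2)
</syntaxhighlight>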
Regularity conditions
The regularity conditions are as follows:
# The partial derivative of ''f''(''X''; ''θ'') with respect to ''θ'' exists almost everywhere. (It can fail to exist on a null set, as long as this set does not depend on ''θ''.)
# The integral of ''f''(''X''; ''θ'') can be differentiated under the integral sign with respect to ''θ''.
# The support of ''f''(''X''; ''θ'') does not depend on ''θ''.
If ''θ'' is a vector then the regularity conditions must hold for every component of ''θ''. It is easy to find an example of a density that does not satisfy the regularity conditions: The density of a Uniform(0, ''θ'') variable fails to satisfy conditions 1 and 3. In this case, even though the Fisher information can be computed from the definition, it will not have the properties it is typically assumed to have.
In terms of likelihood
Because the likelihood of ''θ'' given ''X'' is always proportional to the probability ''f''(''X''; ''θ''), their logarithms necessarily differ by a constant that is independent of ''θ'', and the derivatives of these logarithms with respect to ''θ'' are necessarily equal. Thus one can substitute in a log-likelihood ''l''(''θ''; ''X'') instead of <math>\log f(X;\theta)</math> in the definitions of Fisher information.
Samples of any size
The value ''X'' can represent a single sample drawn from a single distribution or can represent a collection of samples drawn from a collection of distributions. If there are ''n'' samples and the corresponding ''n'' distributions are
statistically independent then the Fisher information will necessarily be the sum of the single-sample Fisher information values, one for each single sample from its distribution. In particular, if the ''n'' distributions are
independent and identically distributed then the Fisher information will necessarily be ''n'' times the Fisher information of a single sample from the common distribution.
Informal derivation of the Cramér–Rao bound
The Cramér–Rao bound states that the inverse of the Fisher information is a lower bound on the variance of any unbiased estimator of ''θ''. H.L. Van Trees (1968) and B. Roy Frieden (2004) provide the following method of deriving the Cramér–Rao bound, a result which describes use of the Fisher information.
Informally, we begin by considering an unbiased estimator <math>\hat\theta(X)</math>. Mathematically, "unbiased" means that
:<math>\operatorname{E}\left[\left.\hat\theta(X) - \theta\,\right|\,\theta\right] = \int \left(\hat\theta(x) - \theta\right) f(x;\theta)\,dx = 0 \text{ regardless of the value of } \theta.</math>
This expression is zero independent of ''θ'', so its partial derivative with respect to ''θ'' must also be zero. By the
product rule, this partial derivative is also equal to
:<math>0 = \frac{\partial}{\partial\theta} \int \left(\hat\theta(x) - \theta\right) f(x;\theta)\,dx = \int \left(\hat\theta(x) - \theta\right) \frac{\partial f}{\partial\theta}\,dx - \int f\,dx.</math>
For each ''θ'', the likelihood function is a probability density function, and therefore <math>\int f\,dx = 1</math>. By using the chain rule on the partial derivative of <math>\log f</math> and then dividing and multiplying by <math>f(x;\theta)</math>, one can verify that
:<math>\frac{\partial f}{\partial\theta} = f\,\frac{\partial \log f}{\partial\theta}.</math>
Using these two facts in the above, we get
:<math>\int \left(\hat\theta(x) - \theta\right) f\,\frac{\partial \log f}{\partial\theta}\,dx = 1.</math>
Factoring the integrand gives
:<math>\int \left[\left(\hat\theta(x) - \theta\right)\sqrt{f}\,\right]\left[\sqrt{f}\,\frac{\partial \log f}{\partial\theta}\right] dx = 1.</math>
Squaring the expression in the integral, the Cauchy–Schwarz inequality yields
:<math>1 = \left(\int \left[\left(\hat\theta(x) - \theta\right)\sqrt{f}\,\right]\left[\sqrt{f}\,\frac{\partial \log f}{\partial\theta}\right] dx\right)^{2} \le \left[\int \left(\hat\theta(x) - \theta\right)^{2} f\,dx\right] \cdot \left[\int \left(\frac{\partial \log f}{\partial\theta}\right)^{2} f\,dx\right].</math>
The second bracketed factor is defined to be the Fisher information, while the first bracketed factor is the expected mean-squared error of the estimator <math>\hat\theta</math>. By rearranging, the inequality tells us that
:<math>\operatorname{Var}\left(\hat\theta\right) \ge \frac{1}{\mathcal{I}(\theta)}.</math>
In other words, the precision to which we can estimate ''θ'' is fundamentally limited by the Fisher information of the likelihood function.
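A hedged numerical check of this bound (the normal model and simulation sizes below are illustrative assumptions, not taken from the text): for ''n'' i.i.d. <math>N(\theta, \sigma^{2})</math> observations the Fisher information is <math>n/\sigma^{2}</math>, and the sample mean is an unbiased estimator whose variance <math>\sigma^{2}/n</math> attains the bound exactly.
<syntaxhighlight lang="python">
# Sketch: empirical variance of the sample mean vs. the Cramér–Rao bound for N(theta, sigma^2).
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n, trials = 0.7, 1.3, 25, 100_000

# each row is one experiment of n i.i.d. N(theta, sigma^2) observations
samples = rng.normal(theta, sigma, size=(trials, n))
estimates = samples.mean(axis=1)            # the sample mean, an unbiased estimator of theta

fisher_info = n / sigma**2                  # Fisher information of n observations
crb = 1.0 / fisher_info                     # Cramér–Rao lower bound, i.e. sigma^2 / n

# the empirical variance of the estimator should be close to (and not below) the bound
print(np.var(estimates), crb)
</syntaxhighlight>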
Single-parameter Bernoulli experiment
A Bernoulli trial is a random variable with two possible outcomes, "success" and "failure", with success having a probability of ''θ''. The outcome can be thought of as determined by a coin toss, with the probability of heads being ''θ'' and the probability of tails being 1 − ''θ''.
Let ''X'' be a Bernoulli trial. The Fisher information contained in ''X'' may be calculated to be
:<math>\mathcal{I}(\theta) = -\operatorname{E}\left[\left.\frac{\partial^{2}}{\partial\theta^{2}} \log\left(\theta^{X}(1-\theta)^{1-X}\right)\,\right|\,\theta\right] = \operatorname{E}\left[\left.\frac{X}{\theta^{2}} + \frac{1-X}{(1-\theta)^{2}}\,\right|\,\theta\right] = \frac{1}{\theta(1-\theta)}.</math>
Because Fisher information is additive, the Fisher information contained in ''n'' independent Bernoulli trials is therefore
:<math>\mathcal{I}(\theta) = \frac{n}{\theta(1-\theta)}.</math>
This is the reciprocal of the variance of the mean number of successes in ''n'' Bernoulli trials, so in this case, the Cramér–Rao bound is an equality.
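The following sketch (not from the original text; it assumes NumPy, and the simulation sizes are arbitrary) checks the Bernoulli result numerically: the variance of the single-trial score approximates <math>1/(\theta(1-\theta))</math>, and the variance of the sample proportion over ''n'' trials matches <math>\theta(1-\theta)/n</math>, the reciprocal of the ''n''-trial Fisher information.
<syntaxhighlight lang="python">
# Sketch: Bernoulli Fisher information via the score, and the attained Cramér–Rao bound.
import numpy as np

rng = np.random.default_rng(2)
theta, n, trials = 0.3, 50, 200_000

x = rng.binomial(1, theta, size=trials)           # single Bernoulli trials
score = x / theta - (1 - x) / (1 - theta)         # d/dtheta log( theta^x (1-theta)^(1-x) )
print(np.var(score), 1 / (theta * (1 - theta)))   # both close to 1/(0.3*0.7) ≈ 4.76

samples = rng.binomial(1, theta, size=(trials, n))
prop = samples.mean(axis=1)                       # sample proportion over n trials
print(np.var(prop), theta * (1 - theta) / n)      # Cramér–Rao bound, attained by the mean
</syntaxhighlight>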
Matrix form
When there are ''N'' parameters, so that ''θ'' is an ''N'' × 1 vector <math>\theta = \begin{bmatrix}\theta_1 & \theta_2 & \cdots & \theta_N\end{bmatrix}^{\mathrm T}</math>, then the Fisher information takes the form of an ''N'' × ''N'' matrix. This matrix is called the Fisher information matrix (FIM) and has typical element
:<math>\bigl[\mathcal{I}(\theta)\bigr]_{i,j} = \operatorname{E}\left[\left.\left(\frac{\partial}{\partial\theta_i} \log f(X;\theta)\right)\left(\frac{\partial}{\partial\theta_j} \log f(X;\theta)\right)\,\right|\,\theta\right].</math>
The FIM is a positive semidefinite matrix. If it is positive definite, then it defines a Riemannian metric on the ''N''-dimensional parameter space. The topic information geometry uses this to connect Fisher information to differential geometry, and in that context, this metric is known as the Fisher information metric.
Under certain regularity conditions, the Fisher information matrix may also be written as
:<math>\bigl[\mathcal{I}(\theta)\bigr]_{i,j} = -\operatorname{E}\left[\left.\frac{\partial^{2}}{\partial\theta_i\,\partial\theta_j} \log f(X;\theta)\,\right|\,\theta\right].</math>
The result is interesting in several ways:
*It can be derived as the Hessian of the relative entropy.
*It can be used as a Riemannian metric for defining Fisher–Rao geometry when it is positive-definite.
*It can be understood as a metric induced from the Euclidean metric, after appropriate change of variable.
*In its complex-valued form, it is the Fubini–Study metric.
*It is the key part of the proof of Wilks' theorem, which allows confidence region estimates for maximum likelihood estimation (for those conditions for which it applies) without needing the Likelihood Principle.
*In cases where the analytical calculations of the FIM above are difficult, it is possible to form an average of easy Monte Carlo estimates of the Hessian of the negative log-likelihood function as an estimate of the FIM. The estimates may be based on values of the negative log-likelihood function or the gradient of the negative log-likelihood function; no analytical calculation of the Hessian of the negative log-likelihood function is needed (a small numerical sketch of this idea follows the list).
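The sketch below illustrates the Monte Carlo idea from the last bullet in a simplified, gradient-based form rather than the Hessian-based form (the <math>N(\mu, \sigma)</math> model, the helper functions, and the finite-difference step are illustrative choices, not from the text): the FIM is estimated by averaging outer products of finite-difference gradients of the log-likelihood over data simulated at ''θ''.
<syntaxhighlight lang="python">
# Sketch: Monte Carlo FIM estimate from averaged outer products of numerical score gradients.
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 0.0, 1.5
theta = np.array([mu, sigma])                 # parameter vector (mu, sigma); illustrative model

def log_lik(theta, x):
    """Log-likelihood of a single observation x under N(mu, sigma)."""
    m, s = theta
    return -0.5 * np.log(2 * np.pi * s**2) - (x - m)**2 / (2 * s**2)

def grad_log_lik(theta, x, eps=1e-5):
    """Central finite-difference gradient of the log-likelihood in each parameter."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (log_lik(tp, x) - log_lik(tm, x)) / (2 * eps)
    return g

# simulate data at theta and average the outer products of the per-observation gradients
x = rng.normal(mu, sigma, size=50_000)
scores = np.array([grad_log_lik(theta, xi) for xi in x])
fim_estimate = scores.T @ scores / len(x)

# for X ~ N(mu, sigma) the exact FIM is diag(1/sigma^2, 2/sigma^2)
print(fim_estimate)
print(np.diag([1 / sigma**2, 2 / sigma**2]))
</syntaxhighlight>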
Orthogonal parameters
We say that two parameters ''θ<sub>i</sub>'' and ''θ<sub>j</sub>'' are orthogonal if the element of the ''i''th row and ''j''th column of the Fisher information matrix is zero. Orthogonal parameters are easy to deal with in the sense that their maximum likelihood estimates are independent and can be calculated separately. When dealing with research problems, it is very common for the researcher to invest some time searching for an orthogonal parametrization of the densities involved in the problem.
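As a hedged example (the normal model below is chosen here for illustration and is not part of the original text): for <math>X \sim N(\mu, \sigma)</math>, the off-diagonal FIM entry for the pair (''μ'', ''σ'') is zero, so the two parameters are orthogonal. The sketch verifies this with a Monte Carlo average of the product of the two scores.
<syntaxhighlight lang="python">
# Sketch: the (mu, sigma) cross term of the FIM for N(mu, sigma) vanishes.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 2.0, 0.8
x = rng.normal(mu, sigma, size=500_000)

score_mu = (x - mu) / sigma**2                       # d/dmu    log f(x; mu, sigma)
score_sigma = -1 / sigma + (x - mu)**2 / sigma**3    # d/dsigma log f(x; mu, sigma)

# the off-diagonal FIM entry E[score_mu * score_sigma] is zero: mu and sigma are orthogonal
print(np.mean(score_mu * score_sigma))
</syntaxhighlight>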
Singular statistical model
If the Fisher information matrix is positive definite for all ''θ'', then the corresponding statistical model is said to be ''regular''; otherwise, the statistical model is said to be ''singular''. Examples of singular statistical models include the following: normal mixtures, binomial mixtures, multinomial mixtures, Bayesian networks, neural networks, radial basis functions, hidden Markov models, stochastic context-free grammars, reduced rank regressions, Boltzmann machines.
In machine learning, if a statistical model is devised so that it extracts hidden structure from a random phenomenon, then it naturally becomes singular.
Multivariate normal distribution
The FIM for an ''N''-variate multivariate normal distribution, <math>X \sim N\left(\mu(\theta), \Sigma(\theta)\right)</math>, has a special form. Let the ''K''-dimensional vector of parameters be <math>\theta = \begin{bmatrix}\theta_1 & \cdots & \theta_K\end{bmatrix}^{\mathrm T}</math> and the vector of random normal variables be <math>X = \begin{bmatrix}X_1 & \cdots & X_N\end{bmatrix}^{\mathrm T}</math>. Assume that the mean values of these random variables are <math>\mu(\theta) = \begin{bmatrix}\mu_1(\theta) & \cdots & \mu_N(\theta)\end{bmatrix}^{\mathrm T}</math>, and let <math>\Sigma(\theta)</math> be the covariance matrix. Then, for <math>1 \le m, n \le K</math>, the (''m'', ''n'') entry of the FIM is:
:<math>\mathcal{I}_{m,n} = \frac{\partial \mu^{\mathrm T}}{\partial \theta_m} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_n} + \frac{1}{2} \operatorname{tr}\left(\Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_m} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_n}\right),</math>
where <math>(\cdot)^{\mathrm T}</math> denotes the transpose of a vector, <math>\operatorname{tr}(\cdot)</math> denotes the trace of a square matrix, and:
:<math>\frac{\partial \mu}{\partial \theta_m} = \begin{bmatrix} \dfrac{\partial \mu_1}{\partial \theta_m} & \dfrac{\partial \mu_2}{\partial \theta_m} & \cdots & \dfrac{\partial \mu_N}{\partial \theta_m} \end{bmatrix}^{\mathrm T}, \qquad \frac{\partial \Sigma}{\partial \theta_m} = \begin{bmatrix} \dfrac{\partial \Sigma_{1,1}}{\partial \theta_m} & \dfrac{\partial \Sigma_{1,2}}{\partial \theta_m} & \cdots & \dfrac{\partial \Sigma_{1,N}}{\partial \theta_m} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial \Sigma_{N,1}}{\partial \theta_m} & \dfrac{\partial \Sigma_{N,2}}{\partial \theta_m} & \cdots & \dfrac{\partial \Sigma_{N,N}}{\partial \theta_m} \end{bmatrix}.</math>
Note that a special, but very common, case is the one where <math>\Sigma(\theta) = \Sigma</math>, a constant. Then
:<math>\mathcal{I}_{m,n} = \frac{\partial \mu^{\mathrm T}}{\partial \theta_m} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_n}.</math>
In this case the Fisher information matrix may be identified with the coefficient matrix of the normal equations of least squares estimation theory.
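A small sketch of this identification (the design matrix and covariance below are arbitrary illustrative choices, not taken from the text): with a constant covariance and a linear mean <math>\mu(\theta) = A\theta</math>, the FIM entry formula above reduces to <math>A^{\mathrm T}\Sigma^{-1}A</math>, the coefficient matrix of the generalized least-squares normal equations.
<syntaxhighlight lang="python">
# Sketch: constant-covariance multivariate normal with linear mean; FIM equals A^T Sigma^{-1} A.
import numpy as np

rng = np.random.default_rng(5)
N, K = 5, 2
A = rng.normal(size=(N, K))                # design matrix: mu(theta) = A @ theta
L = rng.normal(size=(N, N))
Sigma = L @ L.T + N * np.eye(N)            # an arbitrary positive-definite covariance
Sigma_inv = np.linalg.inv(Sigma)

# d mu / d theta_m is simply the m-th column of A, so the FIM entry formula gives:
fim = np.array([[A[:, m] @ Sigma_inv @ A[:, n] for n in range(K)] for m in range(K)])

print(fim)
print(A.T @ Sigma_inv @ A)                 # identical: the GLS normal-equations matrix
</syntaxhighlight>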
Another special case occurs when the mean and covariance depend on two different vector parameters, say, ''β'' and ''θ''. This is especially popular in the analysis of spatial data, which often uses a linear model with correlated residuals. In this case,
:<math>\mathcal{I}(\beta, \theta) = \operatorname{diag}\left(\mathcal{I}(\beta), \mathcal{I}(\theta)\right),</math>
where
:<math>\mathcal{I}(\beta) = \frac{\partial \mu^{\mathrm T}}{\partial \beta} \Sigma^{-1} \frac{\partial \mu}{\partial \beta}, \qquad \bigl[\mathcal{I}(\theta)\bigr]_{m,n} = \frac{1}{2} \operatorname{tr}\left(\Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_m} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_n}\right).</math>
Properties
Chain rule
Similar to the entropy or mutual information, the Fisher information also possesses a chain rule decomposition. In particular, if ''X'' and ''Y'' are jointly distributed random variables, it follows that:
:<math>\mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_{Y\mid X}(\theta),</math>
where <math>\mathcal{I}_{Y\mid X}(\theta) = \operatorname{E}_X\left[\mathcal{I}_{Y\mid X=x}(\theta)\right]</math> and <math>\mathcal{I}_{Y\mid X=x}(\theta)</math> is the Fisher information of ''Y'' relative to <math>\theta</math> calculated with respect to the conditional density of ''Y'' given a specific value ''X'' = ''x''.
As a special case, if the two random variables are independent, the information yielded by the two random variables is the sum of the information from each random variable separately:
:<math>\mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_Y(\theta).</math>
Consequently, the information in a random sample of ''n''
independent and identically distributed observations is ''n'' times the information in a sample of size 1.
F-divergence
Given a convex function