Integrated nested Laplace approximations (INLA) is a method for approximate Bayesian inference based on Laplace's method.
It is designed for a class of models called latent Gaussian models (LGMs), for which it can be a fast and accurate alternative to Markov chain Monte Carlo methods for computing posterior marginal distributions. Because it is relatively fast even for large data sets in certain problems and models, INLA has been a popular inference method in applied statistics, in particular
spatial statistics,
ecology, and
epidemiology. It is also possible to combine INLA with a
finite element method solution of a
stochastic partial differential equation
to study e.g. spatial point processes and
species distribution models. The INLA method is implemented in the R-INLA
R package.
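To illustrate the workflow, the following is a minimal sketch of fitting a latent Gaussian model with R-INLA; the data frame <code>df</code> and its columns <code>y</code>, <code>x1</code> and <code>region</code> are hypothetical and serve only to show the typical structure of a model formula.

<syntaxhighlight lang="r">
library(INLA)

# Linear predictor: intercept + fixed effect of x1 + an independent latent
# effect per region; together these terms form the latent field.
formula <- y ~ 1 + x1 + f(region, model = "iid")

# Fit a Poisson LGM; INLA returns approximate posterior marginals.
result <- inla(formula, family = "poisson", data = df)

summary(result)                                   # fixed effects, hyperparameters
plot(result$marginals.fixed[["x1"]], type = "l")  # posterior marginal of x1
</syntaxhighlight>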
Latent Gaussian models
Let <math>y = (y_1, \dots, y_n)</math> denote the response variable (that is, the observations), which belongs to an exponential family, with the mean <math>\mu_i</math> (of <math>y_i</math>) being linked to a linear predictor <math>\eta_i</math> via an appropriate link function. The linear predictor can take the form of a (Bayesian) additive model. All latent effects (the linear predictor, the intercept, coefficients of possible covariates, and so on) are collectively denoted by the vector <math>x</math>. The hyperparameters of the model are denoted by <math>\theta</math>. As per Bayesian statistics, <math>x</math> and <math>\theta</math> are random variables with prior distributions.
The observations are assumed to be conditionally independent given <math>x</math> and <math>\theta</math>:
:<math>p(y \mid x, \theta) = \prod_{i \in \mathcal{I}} p(y_i \mid \eta_i, \theta),</math>
where <math>\mathcal{I}</math> is the set of indices for observed elements of <math>y</math> (some elements may be unobserved, and for these INLA computes a posterior predictive distribution). Note that the linear predictor <math>\eta</math> is part of <math>x</math>.
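Under this factorization, the log-likelihood is a sum of pointwise terms over the observed indices only. The following base-R sketch assumes, purely for illustration, a Poisson likelihood with a log link:

<syntaxhighlight lang="r">
# Log-likelihood of conditionally independent observations (Poisson example).
# Unobserved elements of y are NA and are simply skipped; eta is the linear
# predictor, mapped to the Poisson mean through the log link.
loglik <- function(y, eta) {
  I <- which(!is.na(y))  # set of indices of observed elements
  sum(dpois(y[I], lambda = exp(eta[I]), log = TRUE))
}
</syntaxhighlight>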
For the model to be a latent Gaussian model, it is assumed that <math>x</math> is a Gaussian Markov random field (GMRF) (that is, a multivariate Gaussian with additional conditional independence properties) with probability density
:<math>p(x \mid \theta) \propto |Q(\theta)|^{1/2} \exp\left(-\frac{1}{2} x^\top Q(\theta)\, x\right),</math>
where <math>Q(\theta)</math> is a <math>\theta</math>-dependent sparse precision matrix and <math>|Q(\theta)|</math> is its determinant. The precision matrix is sparse due to the GMRF assumption. The prior distribution <math>p(\theta)</math> for the hyperparameters need not be Gaussian. However, the number of hyperparameters, <math>\dim(\theta)</math>, is assumed to be small (say, less than 15).
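The sparsity of <math>Q(\theta)</math> is what makes this density cheap to evaluate even for a large latent field. The following sketch evaluates the GMRF log-density with the Matrix package, using an illustrative tridiagonal precision matrix:

<syntaxhighlight lang="r">
library(Matrix)

# Log-density of a zero-mean GMRF, log p(x | theta), up to the additive
# constant -(n/2) log(2*pi). For a sparse Q, determinant() uses a sparse
# Cholesky factorization, so both terms are cheap to compute.
gmrf_logdens <- function(x, Q) {
  logdet <- as.numeric(determinant(Q, logarithm = TRUE)$modulus)
  0.5 * logdet - 0.5 * as.numeric(t(x) %*% (Q %*% x))
}

# Illustrative sparse precision: tridiagonal, positive definite.
n <- 100
Q <- bandSparse(n, k = c(0, 1),
                diagonals = list(rep(2, n), rep(-1, n - 1)),
                symmetric = TRUE)
gmrf_logdens(rnorm(n), Q)
</syntaxhighlight>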
Approximate Bayesian inference with INLA
In Bayesian inference, one wants to solve for the posterior distribution of the latent variables <math>x</math> and <math>\theta</math>. Applying Bayes' theorem, the joint posterior distribution of <math>x</math> and <math>\theta</math> is given by
:<math>p(x, \theta \mid y) \propto p(\theta)\, p(x \mid \theta) \prod_{i \in \mathcal{I}} p(y_i \mid \eta_i, \theta).</math>
Obtaining the exact posterior is generally a very difficult problem. In INLA, the main aim is to approximate the posterior marginals
:<math>p(x_i \mid y) = \int p(x_i \mid \theta, y)\, p(\theta \mid y)\, d\theta, \qquad p(\theta_j \mid y) = \int p(\theta \mid y)\, d\theta_{-j},</math>
where <math>\theta_{-j}</math> denotes the vector <math>\theta</math> with its <math>j</math>th component omitted.
A key idea of INLA is to construct nested approximations given by
:<math>\tilde{p}(x_i \mid y) = \int \tilde{p}(x_i \mid \theta, y)\, \tilde{p}(\theta \mid y)\, d\theta, \qquad \tilde{p}(\theta_j \mid y) = \int \tilde{p}(\theta \mid y)\, d\theta_{-j},</math>
where <math>\tilde{p}(\cdot \mid \cdot)</math> is an approximated posterior density. The approximation to the marginal density <math>p(x_i \mid y)</math> is obtained in a nested fashion by first approximating <math>p(\theta \mid y)</math> and <math>p(x_i \mid \theta, y)</math>, and then numerically integrating out <math>\theta</math> as
:<math>\tilde{p}(x_i \mid y) \approx \sum_k \tilde{p}(x_i \mid \theta_k, y)\, \tilde{p}(\theta_k \mid y)\, \Delta_k,</math>
where the summation is over a set of evaluation points <math>\theta_k</math>, with integration weights given by <math>\Delta_k</math>. The approximation of <math>p(\theta_j \mid y)</math> is computed by numerically integrating <math>\theta_{-j}</math> out from <math>\tilde{p}(\theta \mid y)</math>.
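The following base-R sketch makes the weighted sum concrete; all of its inputs (the evaluation points, the weights, and the conditional marginal evaluator) are hypothetical placeholders for quantities that the method computes elsewhere:

<syntaxhighlight lang="r">
# Schematic nested integration. Assumed (hypothetical) inputs:
#   theta_grid : list of evaluation points theta_k
#   delta      : integration weights Delta_k
#   p_theta    : approximate densities p~(theta_k | y)
#   cond_marg  : function(x_vals, theta) giving p~(x_i | theta, y) on a grid
approx_marginal <- function(x_vals, theta_grid, delta, p_theta, cond_marg) {
  dens <- numeric(length(x_vals))
  for (k in seq_along(theta_grid)) {
    dens <- dens + cond_marg(x_vals, theta_grid[[k]]) * p_theta[k] * delta[k]
  }
  dens / sum(dens * diff(x_vals)[1])  # renormalize (equally spaced x grid)
}
</syntaxhighlight>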
To get the approximate distribution <math>\tilde{p}(\theta \mid y)</math>, one can use the relation
:<math>p(\theta \mid y) = \frac{p(x, \theta \mid y)}{p(x \mid \theta, y)}</math>
as the starting point. Then <math>\tilde{p}(\theta \mid y)</math> is obtained at a specific value of the hyperparameters <math>\theta = \theta_k</math> with the Laplace approximation
:<math>\tilde{p}(\theta_k \mid y) \propto \left. \frac{p(x, \theta_k, y)}{\tilde{p}_G(x \mid \theta_k, y)} \right|_{x = x^*(\theta_k)},</math>
where <math>\tilde{p}_G(x \mid \theta_k, y)</math> is the Gaussian approximation to <math>p(x \mid \theta_k, y)</math> whose mode at a given <math>\theta_k</math> is <math>x^*(\theta_k)</math>. The mode can be found numerically, for example with the Newton–Raphson method.
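To make this step concrete, the following sketch runs Newton–Raphson on a Poisson model in which the linear predictor equals the latent field (an illustrative assumption), and then evaluates the Laplace approximation of <math>\tilde{p}(\theta \mid y)</math> on the log scale, up to an additive constant:

<syntaxhighlight lang="r">
library(Matrix)

# Newton-Raphson for the mode x*(theta) of log p(x | theta, y), here
#   log p(x | theta, y) = -0.5 x'Qx + sum_i (y_i x_i - exp(x_i)) + const,
# i.e. a Poisson likelihood with eta = x (illustrative assumption).
find_mode <- function(y, Q, x = rep(0, length(y)), tol = 1e-8) {
  for (iter in 1:100) {
    grad <- as.numeric(y - exp(x) - Q %*% x)  # gradient of the log density
    H    <- Q + Diagonal(x = exp(x))          # negative Hessian, still sparse
    step <- as.numeric(solve(H, grad))        # sparse solve for the step
    x    <- x + step
    if (max(abs(step)) < tol) break
  }
  list(mode = x, H = H)
}

# Log of the Laplace approximation to p(theta | y), up to a constant:
# the log-joint at the mode minus log p~_G(x* | theta, y); at its own mode
# the Gaussian approximation contributes 0.5 * log|H| (theta-independent
# constants are dropped throughout).
log_p_theta <- function(y, Q, log_prior_theta) {
  m  <- find_mode(y, Q)
  xs <- m$mode
  log_prior_theta +
    0.5 * as.numeric(determinant(Q, logarithm = TRUE)$modulus) -
    0.5 * as.numeric(t(xs) %*% (Q %*% xs)) +
    sum(y * xs - exp(xs)) -
    0.5 * as.numeric(determinant(m$H, logarithm = TRUE)$modulus)
}
</syntaxhighlight>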
The trick in the Laplace approximation above is that the Gaussian approximation is applied to the full conditional of <math>x</math> in the denominator, since this full conditional is usually close to a Gaussian due to the GMRF property of <math>x</math>. Applying the approximation here improves the accuracy of the method, since the posterior <math>p(\theta \mid y)</math> itself need not be close to a Gaussian, and so the Gaussian approximation is not applied directly to <math>p(\theta \mid y)</math>. The second important property of a GMRF, the sparsity of the precision matrix <math>Q(\theta)</math>, is required for the efficient computation of <math>\tilde{p}(\theta \mid y)</math> for each value <math>\theta_k</math>.
Obtaining the approximate distribution <math>\tilde{p}(x_i \mid \theta, y)</math> is more involved, and the INLA method provides three options for this: the Gaussian approximation, the Laplace approximation, or the simplified Laplace approximation. For the numerical integration needed to obtain <math>\tilde{p}(x_i \mid y)</math>, three options are likewise available: grid search, the central composite design, or empirical Bayes.
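In the R-INLA package, these choices are exposed through the <code>control.inla</code> argument; for example (reusing the hypothetical data frame from above):

<syntaxhighlight lang="r">
# Choosing the approximation for p~(x_i | theta, y) via `strategy` and the
# integration scheme over theta via `int.strategy`.
result <- inla(y ~ 1 + x1 + f(region, model = "iid"),
               family = "poisson", data = df,
               control.inla = list(
                 strategy     = "simplified.laplace",  # or "gaussian", "laplace"
                 int.strategy = "ccd"                  # or "grid", "eb"
               ))
</syntaxhighlight>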
Further reading
* {{cite book |first=Virgilio |last=Gomez-Rubio |title=Bayesian inference with INLA |publisher=Chapman and Hall/CRC |year=2021 |isbn=978-1-03-217453-2}}