
Bayesian quadrature is a method for approximating intractable integration problems. It falls within the class of probabilistic numerical methods. Bayesian quadrature views numerical integration as a Bayesian inference task, where function evaluations are used to estimate the integral of that function. For this reason, it is sometimes also referred to as "Bayesian probabilistic numerical integration" or "Bayesian numerical integration". The name "Bayesian cubature" is also sometimes used when the integrand is multi-dimensional. A potential advantage of this approach is that it provides probabilistic uncertainty quantification for the value of the integral.




Numerical integration

Let f : \mathcal{X} \rightarrow \mathbb{R} be a function defined on a domain \mathcal{X} (where typically \mathcal{X} \subseteq \mathbb{R}^d). In numerical integration, function evaluations f(x_1), \ldots, f(x_n) at distinct locations x_1, \ldots, x_n in \mathcal{X} are used to estimate the integral of f against a measure \nu: i.e. \nu[f] := \int_{\mathcal{X}} f(x) \, \nu(\mathrm{d}x). Given weights w_1, \ldots, w_n \in \mathbb{R}, a quadrature rule is an estimator of \nu[f] of the form \hat{\nu}[f] := \sum_{i=1}^n w_i f(x_i). Bayesian quadrature consists of specifying a prior distribution over f, conditioning this prior on f(x_1), \ldots, f(x_n) to obtain a posterior distribution over f, then computing the implied posterior distribution on \nu[f]. The name "quadrature" comes from the fact that the posterior mean on \nu[f] sometimes takes the form of a quadrature rule whose weights are determined by the choice of prior.
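For instance, the classical composite trapezoidal rule is a quadrature rule in exactly this sense: equally spaced nodes with the endpoint weights halved. A brief Python sketch (illustrative, not from the text):

```python
import numpy as np

def quadrature_rule(f, x, w):
    """Generic quadrature rule: hat{nu}[f] = sum_i w_i f(x_i)."""
    return float(np.sum(w * f(x)))

# Composite trapezoidal rule on [0, 1]: n equally spaced nodes,
# interior weights 1/(n-1), endpoint weights halved.
n = 101
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)

estimate = quadrature_rule(np.sin, x, w)  # approximates int_0^1 sin(x) dx = 1 - cos(1)
```

Bayesian quadrature recovers rules of this same form, but with weights w_i implied by the prior rather than fixed in advance.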


Bayesian quadrature with Gaussian processes

The most common choice of prior distribution for f is a Gaussian process, as this permits conjugate inference to obtain a closed-form posterior distribution on \nu[f]. Suppose we have a Gaussian process with prior mean function m : \mathcal{X} \rightarrow \mathbb{R} and covariance function (or kernel function) k : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}. Then, the posterior distribution on f is a Gaussian process with mean m_n : \mathcal{X} \rightarrow \mathbb{R} and kernel k_n : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R} given by:

m_n(x) = m(x) + k(x,X) k(X,X)^{-1} f(X) \qquad \text{and} \qquad k_n(x,y) = k(x,y) - k(x,X) k(X,X)^{-1} k(X,y),

where (k(X,X))_{ij} = k(x_i, x_j), (f(X))_i = f(x_i), (k(\cdot,X))_i = k(\cdot, x_i) and (k(X,\cdot))_i = k(x_i, \cdot). Furthermore, the posterior distribution on \nu[f] is a univariate Gaussian distribution with mean \mathbb{E}[\nu[f]] and variance \mathbb{V}[\nu[f]] given by

\mathbb{E}[\nu[f]] = \nu[m] + \nu[k(\cdot,X)] k(X,X)^{-1} f(X) \qquad \text{and} \qquad \mathbb{V}[\nu[f]] = \nu\nu[k] - \nu[k(\cdot,X)] k(X,X)^{-1} \nu[k(X,\cdot)].

The function \nu[k(\cdot, x)] = \int_{\mathcal{X}} k(y, x) \, \nu(\mathrm{d}y) is the kernel mean embedding of k and \nu\nu[k] = \int_{\mathcal{X}} \int_{\mathcal{X}} k(x, y) \, \nu(\mathrm{d}x) \, \nu(\mathrm{d}y) denotes the integral of k with respect to both inputs. In particular, note that the posterior mean is a quadrature rule with weights w_i = (\nu[k(\cdot,X)] k(X,X)^{-1})_i, and the posterior variance provides a quantification of the user's uncertainty over the value of \nu[f]. In more challenging integration problems, where the prior distribution cannot be relied upon as a meaningful representation of epistemic uncertainty, it is necessary to use the data f(x_1), \ldots, f(x_n) to set the kernel hyperparameters using, for example, maximum likelihood estimation. The estimation of kernel hyperparameters introduces adaptivity into Bayesian quadrature.
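These expressions become concrete with a kernel whose embeddings are available in closed form. The sketch below (illustrative; the Brownian motion kernel, zero prior mean and Lebesgue measure on [0,1] are simplifying assumptions, not choices made by the text) uses k(x,y) = min(x,y), for which \nu[k(\cdot,x)] = x - x^2/2 and \nu\nu[k] = 1/3:

```python
import numpy as np

def bq_posterior(f_vals, X):
    """Posterior mean and variance of nu[f] under a zero-mean GP with the
    Brownian motion kernel k(x, y) = min(x, y) and nu = Lebesgue on [0, 1]."""
    K = np.minimum.outer(X, X)           # Gram matrix k(X, X)
    km = X - X**2 / 2                    # kernel mean nu[k(., X)], closed form
    nu_nu_k = 1.0 / 3.0                  # initial error nu nu[k], closed form
    w = np.linalg.solve(K, km)           # quadrature weights k(X,X)^{-1} nu[k(., X)]
    return w @ f_vals, nu_nu_k - km @ w  # E[nu[f]], V[nu[f]]

X = np.linspace(0.05, 0.95, 10)
mean, var = bq_posterior(X**2, X)  # integrand f(x) = x^2, true integral 1/3
```

The returned variance shrinks as further points are added, quantifying the remaining uncertainty about the integral.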


Example

Consider estimation of the integral

\nu[f] = \int_0^1 f(x) \, \mathrm{d}x \approx 1.79 \quad \text{for} \quad f(x) = (1 + x^2) \sin(5 \pi x) + \frac{8}{5}

using a Bayesian quadrature rule based on a zero-mean Gaussian process prior with the Matérn covariance function of smoothness 3/2 and correlation length \rho = 1/5. This covariance function is

k(x, y) = \big(1 + \sqrt{3} \, |x - y| / \rho \big) \exp\!\big( \! - \sqrt{3} \, |x - y| / \rho \big).

It is straightforward (though tedious) to compute that

\nu[k(\cdot, x)] = \int_0^1 k(y, x) \, \mathrm{d}y = \frac{4\rho}{\sqrt{3}} - \frac{1}{3} \exp\bigg(\frac{\sqrt{3}(x-1)}{\rho}\bigg) \big(3 + 2\sqrt{3}\,\rho - 3x\big) - \frac{1}{3} \exp\bigg(\!-\frac{\sqrt{3}\,x}{\rho}\bigg) \big(3x + 2\sqrt{3}\,\rho\big)

and

\nu\nu[k] = \int_0^1 \int_0^1 k(x, y) \, \mathrm{d}x \, \mathrm{d}y = \frac{2\rho}{3} \Bigg[ 2\sqrt{3} - 3\rho + \exp\bigg(\!-\frac{\sqrt{3}}{\rho}\bigg) \big( \sqrt{3} + 3\rho \big) \Bigg].

Convergence of the Bayesian quadrature point estimate \mathbb{E}[\nu[f]] and concentration of the posterior mass, as quantified by \mathbb{V}[\nu[f]], around the true integral \nu[f] as f is evaluated at more and more points is displayed in the accompanying animation.
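A short numerical sketch of this example, implementing the closed forms above (the number and placement of evaluation points are illustrative choices, not prescribed by the text):

```python
import numpy as np

rho = 1.0 / 5.0
s3 = np.sqrt(3.0)

def f(x):
    return (1 + x**2) * np.sin(5 * np.pi * x) + 8.0 / 5.0

def k(x, y):
    """Matérn 3/2 covariance with correlation length rho."""
    r = np.abs(x - y)
    return (1 + s3 * r / rho) * np.exp(-s3 * r / rho)

def kernel_mean(x):
    """nu[k(., x)] = int_0^1 k(y, x) dy, closed form from the text."""
    return (4 * rho / s3
            - np.exp(s3 * (x - 1) / rho) * (3 + 2 * s3 * rho - 3 * x) / 3
            - np.exp(-s3 * x / rho) * (3 * x + 2 * s3 * rho) / 3)

def initial_error():
    """nu nu[k] = double integral of k over [0,1]^2, closed form from the text."""
    return 2 * rho / 3 * (2 * s3 - 3 * rho + np.exp(-s3 / rho) * (s3 + 3 * rho))

n = 30
X = np.linspace(0.0, 1.0, n)
K = k(X[:, None], X[None, :]) + 1e-10 * np.eye(n)  # small jitter for stability
w = np.linalg.solve(K, kernel_mean(X))             # Bayesian quadrature weights
estimate = w @ f(X)                                # posterior mean E[nu[f]]
variance = initial_error() - kernel_mean(X) @ w    # posterior variance V[nu[f]]
```

With more evaluation points the estimate approaches 1.79 and the posterior variance contracts, as in the accompanying animation.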


Advantages and disadvantages

Since Bayesian quadrature is an example of probabilistic numerics, it inherits certain advantages compared with traditional numerical integration methods:

* It allows uncertainty to be quantified and propagated through all subsequent computations to explicitly model the impact of numerical error.
* It provides a principled way to incorporate prior knowledge by using a judicious choice of prior distributions for f, which may be more sophisticated compared to the standard Gaussian process just described.
* It permits more efficient use of information, e.g. jointly inferring multiple related quantities of interest or using active learning to reduce the required number of points.

Despite these merits, Bayesian quadrature methods possess the following limitations:

* Although the Bayesian paradigm allows a principled treatment of the quantification of uncertainty, posterior inference over \nu[f] is not always tractable, thus requiring a second-level estimation. For example, for Bayesian quadrature with Gaussian processes, the kernel mean embedding \nu[k(\cdot, x)] has ''no'' closed-form expression for a general kernel k and measure \nu.
* The computational cost of Bayesian quadrature methods based on Gaussian processes is in general \mathcal{O}(n^3) due to the cost of inverting n \times n matrices, which may limit their applicability to large-scale problems.


Algorithmic design


Prior distributions

The most commonly used prior for f is a Gaussian process prior. This is mainly due to the advantage provided by Gaussian conjugacy and the fact that Gaussian processes can encode a wide range of prior knowledge, including smoothness, periodicity and sparsity, through a careful choice of prior covariance. However, a number of other prior distributions have also been proposed. These include multi-output Gaussian processes, which are particularly useful when tackling multiple related numerical integration tasks simultaneously or sequentially, and tree-based priors such as Bayesian additive regression trees, which are well suited for discontinuous f. Additionally, Dirichlet process priors have also been proposed for the integration measure \nu.


Point selection

The points x_1, \ldots, x_n are either considered to be given, or can be selected so as to ensure the posterior on \nu[f] concentrates at a faster rate. One approach consists of using point sets from other quadrature rules. For example, taking independent and identically distributed realisations from \nu recovers a Bayesian approach to Monte Carlo, whereas using certain deterministic point sets such as low-discrepancy sequences or lattices recovers a Bayesian alternative to quasi-Monte Carlo. It is of course also possible to use point sets specifically designed for Bayesian quadrature; see for example work which exploits symmetries in point sets to obtain scalable Bayesian quadrature estimators. Alternatively, points can also be selected adaptively following principles from active learning and Bayesian experimental design so as to directly minimise posterior uncertainty, including for multi-output Gaussian processes.
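Such adaptive selection can be sketched as a greedy loop that repeatedly adds the candidate point that most reduces the posterior variance, which for a fixed Gaussian process prior depends only on the point locations and not on the observed function values. The kernel, lengthscale and candidate grid below are illustrative assumptions, not choices made by the text:

```python
import numpy as np
from math import erf, sqrt, pi

ell = 0.25  # squared-exponential lengthscale (illustrative choice)

def k(x, y):
    """Squared-exponential kernel evaluated on pairs of point sets."""
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * ell**2))

def kernel_mean(x):
    """nu[k(., x)] for nu = Lebesgue measure on [0, 1] (erf closed form)."""
    c = ell * sqrt(pi / 2)
    return np.array([c * (erf((1 - xi) / (sqrt(2) * ell)) + erf(xi / (sqrt(2) * ell)))
                     for xi in np.atleast_1d(x)])

# Initial error nu nu[k]: double integral of k over [0,1]^2, closed form.
NU_NU_K = 2 * (ell * sqrt(pi / 2) * erf(1 / (sqrt(2) * ell))
               - ell**2 * (1 - np.exp(-1 / (2 * ell**2))))

def posterior_variance(X):
    """V[nu[f]] for design X; independent of the function values f(X)."""
    K = k(X, X) + 1e-10 * np.eye(len(X))  # jitter for numerical stability
    km = kernel_mean(X)
    return NU_NU_K - km @ np.linalg.solve(K, km)

# Greedy active-learning design: add the candidate minimising the variance.
candidates = np.linspace(0.0, 1.0, 101)
design = [0.5]
for _ in range(4):
    best = min(candidates, key=lambda c: posterior_variance(np.array(design + [c])))
    design.append(float(best))
```

Because the variance is independent of f, such a design can even be precomputed before any function evaluations are made.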


Kernel mean and initial error

One of the challenges when implementing Bayesian quadrature is the need to evaluate the function \nu[k(\cdot,x)] and the constant \nu\nu[k]. The former is commonly called the kernel mean, and is a quantity which is key to the computation of kernel-based distances such as the maximum mean discrepancy. The latter is commonly called the initial error, since it provides an upper bound on the integration error before any function values are observed. Unfortunately, the kernel mean and initial error can only be computed in closed form for a small number of (k, \nu) pairs.
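When no closed form exists for a given (k, \nu) pair, one common workaround (sketched below with an illustrative kernel, not one singled out by the text) is to approximate the kernel mean itself by Monte Carlo over samples from \nu, at the price of exactly the kind of second-level estimation error noted among the disadvantages:

```python
import numpy as np

rng = np.random.default_rng(0)

def k(x, y):
    """Inverse-quadratic kernel, chosen here because nu[k(., x)] has no
    convenient closed form under a Gaussian measure nu."""
    return 1.0 / (1.0 + (x - y) ** 2)

# nu = N(0, 1): approximate nu[k(., x)] = E_{Y ~ nu}[k(Y, x)] by Monte Carlo.
samples = rng.standard_normal(100_000)

def kernel_mean_mc(x):
    return float(np.mean(k(samples, x)))
```

The Monte Carlo error in the kernel mean then propagates into the Bayesian quadrature weights, which is why such second-level approximations require care.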


Theory

There have been a number of theoretical guarantees derived for Bayesian quadrature. These usually require Sobolev smoothness properties of the integrand, although recent work also extends to integrands in the reproducing kernel Hilbert space of the Gaussian kernel. Most of the results apply to the case of Monte Carlo or deterministic grid point sets, but some results also extend to adaptive designs.


Software


* ProbNum: Probabilistic numerical methods in Python, including a Bayesian quadrature implementation.
* Emukit: Emulation and decision making under uncertainty in Python.
* QMCPy: Bayesian quadrature with QMC point sets in Python.

