In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at a regular grid, Monte Carlo randomly chooses points at which the integrand is evaluated. This method is particularly useful for higher-dimensional integrals.
There are different methods to perform a Monte Carlo integration, such as uniform sampling, stratified sampling, importance sampling, sequential Monte Carlo (also known as a particle filter), and mean-field particle methods.
Overview
In numerical integration, methods such as the trapezoidal rule use a deterministic approach. Monte Carlo integration, on the other hand, employs a non-deterministic approach: each realization provides a different outcome. In Monte Carlo, the final outcome is an approximation of the correct value with respective error bars, and the correct value is likely to be within those error bars.
The problem Monte Carlo integration addresses is the computation of a multidimensional definite integral
: I = \int_\Omega f(\bar{\mathbf{x}}) \, d\bar{\mathbf{x}},
where Ω, a subset of ℝ^''m'', has volume
: V = \int_\Omega d\bar{\mathbf{x}}.
The naive Monte Carlo approach is to sample points uniformly on Ω:[Newman, 1999, Chap. 1.] given ''N'' uniform samples,
: \bar{\mathbf{x}}_1, \ldots, \bar{\mathbf{x}}_N \in \Omega,
''I'' can be approximated by
: I \approx Q_N \equiv V \frac{1}{N} \sum_{i=1}^N f(\bar{\mathbf{x}}_i) = V \langle f \rangle.
This is because the law of large numbers
ensures that
: \lim_{N \to \infty} Q_N = I.
Given the estimation of ''I'' from ''QN'', the error bars of ''QN'' can be estimated by the sample variance
using the unbiased estimate of the variance.
: \mathrm{Var}(f) \equiv \sigma_N^2 = \frac{1}{N-1} \sum_{i=1}^N \left( f(\bar{\mathbf{x}}_i) - \langle f \rangle \right)^2,
which leads to
: \mathrm{Var}(Q_N) = V^2 \frac{\mathrm{Var}(f)}{N} = V^2 \frac{\sigma_N^2}{N}.
As long as the sequence
: \sigma_1^2, \sigma_2^2, \sigma_3^2, \ldots
is bounded, this variance decreases asymptotically to zero as 1/''N''. The estimation of the error of ''QN'' is thus
: \delta Q_N \approx \sqrt{\mathrm{Var}(Q_N)} = V \frac{\sigma_N}{\sqrt{N}},
which decreases as 1/\sqrt{N}. This is the standard error of the mean multiplied with ''V''.
This result does not depend on the number of dimensions of the integral, which is the promised advantage of Monte Carlo integration against most deterministic methods that depend exponentially on the dimension.
It is important to notice that, unlike in deterministic methods, the estimate of the error is not a strict error bound; random sampling may not uncover all the important features of the integrand, which can result in an underestimate of the error.
While the naive Monte Carlo works for simple examples, an improvement over deterministic algorithms can only be accomplished with algorithms that use problem-specific sampling distributions. With an appropriate sampling distribution it is possible to exploit the fact that almost all higher-dimensional integrands are very localized and only a small subspace contributes notably to the integral.
A large part of the Monte Carlo literature is dedicated to developing strategies to improve the error estimates. In particular, stratified sampling (dividing the region into sub-domains) and importance sampling (sampling from non-uniform distributions) are two examples of such techniques.
Example
A paradigmatic example of a Monte Carlo integration is the estimation of π. Consider the function
: H(x, y) = \begin{cases} 1 & \text{if } x^2 + y^2 \le 1 \\ 0 & \text{else} \end{cases}
and the set Ω = [−1, 1] × [−1, 1] with ''V'' = 4. Notice that
: I = \int_\Omega H(x, y) \, dx \, dy = \pi.
Thus, a crude way of calculating the value of π with Monte Carlo integration is to pick ''N'' random numbers on Ω and compute
: \pi \approx Q_N = 4 \frac{1}{N} \sum_{i=1}^N H(x_i, y_i).
In the figure on the right, the relative error is measured as a function of ''N'', confirming the 1/\sqrt{N} scaling of the error.
C example
Keep in mind that a true random number generator should be used.

int i, throws = 99999, insideCircle = 0;
double randX, randY, pi;
srand(time(NULL));
for (i = 0; i < throws; ++i) {
    randX = rand() / (double) RAND_MAX;  /* uniform sample in [0, 1] */
    randY = rand() / (double) RAND_MAX;
    if (randX * randX + randY * randY < 1.0)
        ++insideCircle;                  /* point falls inside the quarter disk */
}
pi = 4.0 * insideCircle / throws;
Wolfram Mathematica example
The code below describes a process of integrating the function
: f(x) = \frac{1}{1 + \sinh(2x) \log(x)^2}
from