The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function f(x) = e^{-x^2} over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is

\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.

Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809. The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related both to the error function and to the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently: for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator; in the path integral formulation, to find the propagator of the harmonic oscillator; and in statistical mechanics, to find its partition function.

Although no elementary function exists for the error function, as can be proven by the Risch algorithm, the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary ''indefinite integral'' for \int e^{-x^2}\,dx, but the definite integral \int_{-\infty}^{\infty} e^{-x^2}\,dx can be evaluated. The definite integral of an arbitrary Gaussian function is

\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\frac{\pi}{a}}.
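As a quick numerical sanity check of this last identity, the sketch below (assuming SciPy is available, with arbitrarily chosen sample values for a and b) compares numerical quadrature against \sqrt{\pi/a}:

```python
import math
from scipy.integrate import quad

a, b = 2.5, -1.3   # arbitrary sample values; any a > 0 and real b should work
numeric, _ = quad(lambda x: math.exp(-a * (x + b)**2), -math.inf, math.inf)
print(numeric, math.sqrt(math.pi / a))   # the two values agree to quadrature precision
```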


Computation


By polar coordinates

A standard way to compute the Gaussian integral, the idea of which goes back to Poisson, is to make use of the property that:

\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \int_{-\infty}^{\infty} e^{-x^2}\,dx \int_{-\infty}^{\infty} e^{-y^2}\,dy = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy.

Consider the function e^{-(x^2+y^2)} = e^{-r^2} on the plane \mathbb{R}^2, and compute its integral two ways:
# on the one hand, by double integration in the Cartesian coordinate system, its integral is a square: \left(\int e^{-x^2}\,dx\right)^2;
# on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be \pi.

Comparing these two computations yields the integral, though one should take care about the improper integrals involved.

\begin{align}
\iint_{\mathbb{R}^2} e^{-(x^2+y^2)}\,dx\,dy
&= \int_0^{2\pi} \int_0^{\infty} e^{-r^2} r\,dr\,d\theta \\
&= 2\pi \int_0^{\infty} r e^{-r^2}\,dr \\
&= 2\pi \int_{-\infty}^{0} \tfrac{1}{2} e^{s}\,ds && s = -r^2 \\
&= \pi \int_{-\infty}^{0} e^{s}\,ds \\
&= \pi \left(e^0 - e^{-\infty}\right) \\
&= \pi,
\end{align}

where the factor of r is the Jacobian determinant which appears because of the transform to polar coordinates (r\,dr\,d\theta is the standard measure on the plane, expressed in polar coordinates; see Wikibooks:Calculus/Polar Integration#Generalization), and the substitution involves taking s = -r^2, so ds = -2r\,dr.

Combining these yields

\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \pi,

so

\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.
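The two computations can also be compared numerically; the sketch below (assuming SciPy) evaluates the radial integral from the polar form and the square of the one-dimensional integral, both of which should come out to \pi:

```python
import math
from scipy.integrate import quad

# Polar form: 2*pi times the radial integral of r*e^{-r^2}
radial, _ = quad(lambda r: r * math.exp(-r**2), 0, math.inf)
print(2 * math.pi * radial)       # ~ 3.14159... (= pi)

# Cartesian form: square of the one-dimensional Gaussian integral
one_dim, _ = quad(lambda x: math.exp(-x**2), -math.inf, math.inf)
print(one_dim**2)                 # ~ 3.14159... as well
```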


Complete proof

To justify the improper double integrals and equating the two expressions, we begin with an approximating function:

I(a) = \int_{-a}^{a} e^{-x^2}\,dx.

If the integral \int_{-\infty}^{\infty} e^{-x^2}\,dx were absolutely convergent we would have that its Cauchy principal value, that is, the limit \lim_{a\to\infty} I(a), would coincide with \int_{-\infty}^{\infty} e^{-x^2}\,dx. To see that this is the case, consider that

\int_{-\infty}^{\infty} \left| e^{-x^2} \right|\,dx < \int_{-\infty}^{-1} -x e^{-x^2}\,dx + \int_{-1}^{1} e^{-x^2}\,dx + \int_{1}^{\infty} x e^{-x^2}\,dx < \infty.

So we can compute \int_{-\infty}^{\infty} e^{-x^2}\,dx by just taking the limit \lim_{a\to\infty} I(a).

Taking the square of I(a) yields

\begin{align}
I(a)^2 &= \left(\int_{-a}^{a} e^{-x^2}\,dx\right)\left(\int_{-a}^{a} e^{-y^2}\,dy\right) \\
&= \int_{-a}^{a} \left(\int_{-a}^{a} e^{-y^2}\,dy\right) e^{-x^2}\,dx \\
&= \int_{-a}^{a} \int_{-a}^{a} e^{-(x^2+y^2)}\,dy\,dx.
\end{align}

Using Fubini's theorem, the above double integral can be seen as an area integral

\iint_{[-a,a]\times[-a,a]} e^{-(x^2+y^2)}\,d(x,y),

taken over a square with vertices \{(-a,a),(a,a),(a,-a),(-a,-a)\} on the ''xy''-plane. Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than I(a)^2, and similarly the integral taken over the square's circumcircle must be greater than I(a)^2. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates:

\begin{align}
x &= r\cos\theta \\
y &= r\sin\theta
\end{align}

\mathbf{J}(r,\theta) = \begin{bmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\[1em] \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{bmatrix} = \begin{bmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{bmatrix}

d(x,y) = |J(r,\theta)|\,d(r,\theta) = r\,d(r,\theta).

\int_0^{2\pi} \int_0^{a} r e^{-r^2}\,dr\,d\theta < I^2(a) < \int_0^{2\pi} \int_0^{a\sqrt{2}} r e^{-r^2}\,dr\,d\theta.

Integrating,

\pi\left(1 - e^{-a^2}\right) < I^2(a) < \pi\left(1 - e^{-2a^2}\right).

By the squeeze theorem, this gives the Gaussian integral

\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.
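The squeeze can be illustrated numerically; the sketch below (assuming SciPy) checks that I(a)^2 lies between the two disk integrals for a few values of a:

```python
import math
from scipy.integrate import quad

for a in (1.0, 2.0, 3.0):
    I_a, _ = quad(lambda x: math.exp(-x**2), -a, a)
    lower = math.pi * (1 - math.exp(-a**2))        # integral over the incircle
    upper = math.pi * (1 - math.exp(-2 * a**2))    # integral over the circumcircle
    print(a, lower < I_a**2 < upper, I_a**2)       # True each time; I(a)^2 -> pi
```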


By Cartesian coordinates

A different technique, which goes back to Laplace (1812), is the following. Let

\begin{align}
y &= xs \\
dy &= x\,ds.
\end{align}

Since the limits on s as y \to \pm\infty depend on the sign of x, it simplifies the calculation to use the fact that e^{-x^2} is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is,

\int_{-\infty}^{\infty} e^{-x^2}\,dx = 2\int_{0}^{\infty} e^{-x^2}\,dx.

Thus, over the range of integration, x \ge 0, and the variables y and s have the same limits. This yields:

\begin{align}
I^2 &= 4 \int_0^{\infty} \int_0^{\infty} e^{-(x^2+y^2)}\,dy\,dx \\
&= 4 \int_0^{\infty} \left(\int_0^{\infty} e^{-(x^2+y^2)}\,dy\right) dx \\
&= 4 \int_0^{\infty} \left(\int_0^{\infty} e^{-x^2(1+s^2)}\,x\,ds\right) dx.
\end{align}

Then, using Fubini's theorem to switch the order of integration:

\begin{align}
I^2 &= 4 \int_0^{\infty} \left(\int_0^{\infty} e^{-x^2(1+s^2)}\,x\,dx\right) ds \\
&= 4 \int_0^{\infty} \left[\frac{-e^{-x^2(1+s^2)}}{2(1+s^2)}\right]_{x=0}^{x=\infty}\,ds \\
&= 4 \left(\frac{1}{2} \int_0^{\infty} \frac{ds}{1+s^2}\right) \\
&= 2 \arctan(s)\Big|_0^{\infty} \\
&= \pi.
\end{align}

Therefore, I = \sqrt{\pi}, as expected.
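The iterated integral after the switch of order can be checked directly; the nested quadrature below (assuming SciPy) is one way to do so:

```python
import math
from scipy.integrate import quad

# Inner integral over x for fixed s, then outer integral over s (the order used after Fubini)
def inner(s):
    value, _ = quad(lambda x: x * math.exp(-x**2 * (1 + s**2)), 0, math.inf)
    return value   # analytically equal to 1 / (2 * (1 + s**2))

outer, _ = quad(inner, 0, math.inf)
print(4 * outer, math.pi)   # both ~ 3.14159...
```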


By Laplace's method

In Laplace approximation, we deal only with terms up to second order in the Taylor expansion, so we consider e^{-x^2} \approx 1 - x^2 \approx (1+x^2)^{-1}. In fact, since (1+t)e^{-t} \le 1 for all t, we have the exact bounds

1 - x^2 \le e^{-x^2} \le (1+x^2)^{-1}.

Then we can take the bound at the Laplace approximation limit:

\int_{[-1,1]} (1-x^2)^n\,dx \le \int_{[-1,1]} e^{-nx^2}\,dx \le \int_{[-1,1]} (1+x^2)^{-n}\,dx.

That is, after rescaling x \mapsto x/\sqrt{n} and enlarging the domain of the (positive) upper-bound integrand to the whole half-line,

2\sqrt{n} \int_{[0,1]} (1-x^2)^n\,dx \le \int_{[-\sqrt{n},\sqrt{n}]} e^{-x^2}\,dx \le 2\sqrt{n} \int_{[0,\infty)} (1+x^2)^{-n}\,dx.

By trigonometric substitution, we exactly compute the two bounds:

2\sqrt{n}\,\frac{(2n)!!}{(2n+1)!!} \quad\text{and}\quad 2\sqrt{n}\,\frac{\pi}{2}\,\frac{(2n-3)!!}{(2n-2)!!}.

By the Wallis formula, the quotient of the two bounds converges to 1, while by direct computation their product converges to \pi; hence each bound, and the integral squeezed between them, converges to \sqrt{\pi}. The Wallis formula is

\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{(2n)^2}{(2n-1)(2n+1)}.

Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula.
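As an illustration, the two bounds can be evaluated for increasing n and compared with \sqrt{\pi}; this is a minimal sketch in plain Python, with the double-factorial helper introduced here only for convenience:

```python
import math

def double_factorial(k):
    # k!! with the convention (-1)!! = 0!! = 1
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

for n in (1, 10, 100):
    lower = 2 * math.sqrt(n) * double_factorial(2*n) / double_factorial(2*n + 1)
    upper = 2 * math.sqrt(n) * (math.pi / 2) * double_factorial(2*n - 3) / double_factorial(2*n - 2)
    print(n, lower, upper)    # both approach sqrt(pi) ~ 1.7724539
```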


Relation to the gamma function

The integrand is an even function,

\int_{-\infty}^{\infty} e^{-x^2}\,dx = 2\int_0^{\infty} e^{-x^2}\,dx.

Thus, after the change of variable x = \sqrt{t}, this turns into the Euler integral

2\int_0^{\infty} e^{-x^2}\,dx = 2\int_0^{\infty} \frac{1}{2}\,e^{-t}\,t^{-\frac{1}{2}}\,dt = \Gamma\left(\frac{1}{2}\right) = \sqrt{\pi},

where \Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t}\,dt is the gamma function. This shows why the factorial of a half-integer is a rational multiple of \sqrt{\pi}. More generally,

\int_0^{\infty} x^n e^{-ax^b}\,dx = \frac{\Gamma\left(\frac{n+1}{b}\right)}{b\,a^{\frac{n+1}{b}}},

which can be obtained by substituting t = ax^b in the integrand of the gamma function to get \Gamma(z) = a^z b \int_0^{\infty} x^{bz-1} e^{-ax^b}\,dx.
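The general formula can be checked numerically; the sketch below (assuming SciPy, with arbitrarily chosen sample values of a, b, and n) compares quadrature against the gamma-function expression:

```python
import math
from scipy.integrate import quad

a, b, n = 2.0, 4.0, 3   # sample values with a > 0, b > 0
numeric, _ = quad(lambda x: x**n * math.exp(-a * x**b), 0, math.inf)
closed_form = math.gamma((n + 1) / b) / (b * a**((n + 1) / b))
print(numeric, closed_form)   # both ~ 0.125 for these sample values
# The special case n = 0, b = 2, a = 1 recovers Gamma(1/2)/2 = sqrt(pi)/2.
```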


Generalizations


The integral of a Gaussian function

The integral of an arbitrary Gaussian function is

\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\frac{\pi}{a}}.

An alternative form is

\int_{-\infty}^{\infty} e^{-ax^2 + bx + c}\,dx = \sqrt{\frac{\pi}{a}}\,e^{\frac{b^2}{4a} + c}.

This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution, for example.
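A quick numerical check of the alternative form (a sketch assuming SciPy, with sample coefficients):

```python
import math
from scipy.integrate import quad

a, b, c = 1.5, 0.7, -0.2   # sample coefficients with a > 0
numeric, _ = quad(lambda x: math.exp(-a*x**2 + b*x + c), -math.inf, math.inf)
closed_form = math.sqrt(math.pi / a) * math.exp(b**2 / (4*a) + c)
print(numeric, closed_form)   # the two values agree
```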


''n''-dimensional and functional generalization

Suppose ''A'' is a symmetric positive-definite (hence invertible) n \times n precision matrix, which is the matrix inverse of the covariance matrix. Then,

\int_{-\infty}^{\infty} \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) d^n x = \int_{-\infty}^{\infty} \exp\left(-\frac{1}{2} x^\mathsf{T} A x\right) d^n x = \sqrt{\frac{(2\pi)^n}{\det A}} = \sqrt{\frac{1}{\det(A/2\pi)}} = \sqrt{\det(2\pi A^{-1})},

where the integral is understood to be over \mathbb{R}^n. This fact is applied in the study of the multivariate normal distribution.

Also,

\int x_{k_1}\cdots x_{k_{2N}} \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) d^n x = \sqrt{\frac{(2\pi)^n}{\det A}}\,\frac{1}{2^N N!}\,\sum_{\sigma\in S_{2N}} (A^{-1})_{k_{\sigma(1)} k_{\sigma(2)}} \cdots (A^{-1})_{k_{\sigma(2N-1)} k_{\sigma(2N)}},

where ''σ'' is a permutation of \{1,\dots,2N\} and the extra factor on the right-hand side is the sum over all combinatorial pairings of \{1,\dots,2N\} of ''N'' copies of ''A''−1. Alternatively,

\int f(\vec{x}) \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) d^n x = \sqrt{\frac{(2\pi)^n}{\det A}}\,\left.\exp\left(\frac{1}{2}\sum_{i,j=1}^{n} (A^{-1})_{ij} \frac{\partial}{\partial x_i}\frac{\partial}{\partial x_j}\right) f(\vec{x})\right|_{\vec{x}=0}

for some analytic function ''f'', provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series.

While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can ''define'' a Gaussian functional integral in analogy to the finite-dimensional case. There is still the problem, though, that (2\pi)^{\infty} is infinite and also the functional determinant would be infinite in general. This can be taken care of if we only consider ratios:

\frac{\displaystyle\int f(x_1)\cdots f(x_{2N}) \exp\left[-\frac{1}{2}\iint A(x_{2N+1},x_{2N+2})\,f(x_{2N+1})\,f(x_{2N+2})\,dx_{2N+1}\,dx_{2N+2}\right] \mathcal{D}f}{\displaystyle\int \exp\left[-\frac{1}{2}\iint A(x_{2N+1},x_{2N+2})\,f(x_{2N+1})\,f(x_{2N+2})\,dx_{2N+1}\,dx_{2N+2}\right] \mathcal{D}f} = \frac{1}{2^N N!}\sum_{\sigma\in S_{2N}} A^{-1}(x_{\sigma(1)},x_{\sigma(2)}) \cdots A^{-1}(x_{\sigma(2N-1)},x_{\sigma(2N)}).

In the DeWitt notation, the equation looks identical to the finite-dimensional case.
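For a concrete finite-dimensional check, the sketch below (assuming NumPy and SciPy, with an arbitrary 2 × 2 positive-definite sample matrix A) verifies the normalization and the N = 1 case of the pairing formula, for which the prefactor 1/(2^N N!) times the sum over S_2 reduces to a single entry of A^{-1}:

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # sample symmetric positive-definite matrix
A_inv = np.linalg.inv(A)

def weight(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v)

# Normalization: should equal sqrt((2*pi)^n / det A) with n = 2
Z, _ = dblquad(weight, -np.inf, np.inf, -np.inf, np.inf)
print(Z, np.sqrt((2 * np.pi)**2 / np.linalg.det(A)))

# Pairing formula, N = 1: integral of x_1 x_2 times the weight equals Z * (A^{-1})_{12}
M, _ = dblquad(lambda y, x: x * y * weight(y, x), -np.inf, np.inf, -np.inf, np.inf)
print(M, Z * A_inv[0, 1])
```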


''n''-dimensional with linear term

If A is again a symmetric positive-definite matrix, then (assuming all are column vectors)

\int \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j + \sum_{i=1}^{n} B_i x_i\right) d^n x = \int \exp\left(-\frac{1}{2}\vec{x}^\mathsf{T} A \vec{x} + \vec{B}^\mathsf{T}\vec{x}\right) d^n x = \sqrt{\frac{(2\pi)^n}{\det A}}\,\exp\left(\frac{1}{2}\vec{B}^\mathsf{T} A^{-1} \vec{B}\right).
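This too can be spot-checked in two dimensions (a sketch assuming NumPy and SciPy, with sample values for A and B):

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # sample symmetric positive-definite matrix
B = np.array([0.3, -0.4])    # sample linear-term coefficients

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v + B @ v)

numeric, _ = dblquad(integrand, -np.inf, np.inf, -np.inf, np.inf)
closed_form = np.sqrt((2 * np.pi)**2 / np.linalg.det(A)) * np.exp(0.5 * B @ np.linalg.inv(A) @ B)
print(numeric, closed_form)   # the two values agree
```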


Integrals of similar form

\int_0^{\infty} x^{2n} e^{-\frac{x^2}{a^2}}\,dx = \sqrt{\pi}\,\frac{a^{2n+1}(2n-1)!!}{2^{n+1}}

\int_0^{\infty} x^{2n+1} e^{-\frac{x^2}{a^2}}\,dx = \frac{n!}{2}\,a^{2n+2}

\int_0^{\infty} x^{2n} e^{-bx^2}\,dx = \frac{(2n-1)!!}{(2b)^n}\sqrt{\frac{\pi}{4b}}

\int_0^{\infty} x^{2n+1} e^{-bx^2}\,dx = \frac{n!}{2b^{n+1}}

\int_0^{\infty} x^{n} e^{-bx^2}\,dx = \frac{\Gamma\left(\frac{n+1}{2}\right)}{2b^{\frac{n+1}{2}}}

where n is a positive integer and !! denotes the double factorial. An easy way to derive these is by differentiating under the integral sign:

\begin{align}
\int_{-\infty}^{\infty} x^{2n} e^{-\alpha x^2}\,dx &= \left(-1\right)^n \int_{-\infty}^{\infty} \frac{\partial^n}{\partial\alpha^n} e^{-\alpha x^2}\,dx = \left(-1\right)^n \frac{\partial^n}{\partial\alpha^n} \int_{-\infty}^{\infty} e^{-\alpha x^2}\,dx \\
&= \sqrt{\pi}\,\left(-1\right)^n \frac{\partial^n}{\partial\alpha^n}\,\alpha^{-\frac{1}{2}} = \sqrt{\frac{\pi}{\alpha}}\,\frac{(2n-1)!!}{(2\alpha)^n}.
\end{align}

One could also integrate by parts and find a recurrence relation to solve this.
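The closed forms in b can be checked directly (a sketch assuming SciPy, with sample values and a small double-factorial helper):

```python
import math
from scipy.integrate import quad

def double_factorial(k):
    # k!! with the convention (-1)!! = 0!! = 1
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

b, n = 1.7, 3   # sample values
even, _ = quad(lambda x: x**(2*n) * math.exp(-b * x**2), 0, math.inf)
odd,  _ = quad(lambda x: x**(2*n + 1) * math.exp(-b * x**2), 0, math.inf)
print(even, double_factorial(2*n - 1) / (2*b)**n * math.sqrt(math.pi / (4*b)))
print(odd,  math.factorial(n) / (2 * b**(n + 1)))
```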


Higher-order polynomials

Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in ''n'' variables may depend only on SL(''n'')-invariants of the polynomial. One such invariant is the discriminant, zeros of which mark the singularities of the integral. However, the integral may also depend on other invariants.

Exponentials of other even polynomials can numerically be solved using series. These may be interpreted as formal calculations when there is no convergence. For example, the solution to the integral of the exponential of a quartic polynomial is

\int_{-\infty}^{\infty} e^{ax^4 + bx^3 + cx^2 + dx + f}\,dx = \frac{1}{2} e^f \sum_{\substack{n,m,p \ge 0 \\ n+p \text{ even}}} \frac{b^n}{n!}\,\frac{c^m}{m!}\,\frac{d^p}{p!}\,\frac{\Gamma\left(\frac{3n+2m+p+1}{4}\right)}{(-a)^{\frac{3n+2m+p+1}{4}}}.

The n + p \equiv 0 \pmod 2 requirement is because the integral from −∞ to 0 contributes a factor of (-1)^{n+p}/2 to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory.
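For coefficients where the series converges (a < 0), the formula can be compared against direct quadrature. The sketch below (assuming SciPy, with arbitrarily chosen small coefficients and a truncation order picked for illustration) does so:

```python
import math
from scipy.integrate import quad

a, b, c, d, f = -1.0, 0.3, 0.2, 0.1, 0.0   # sample coefficients with a < 0
N = 40                                     # truncation order chosen for illustration

series = 0.0
for n in range(N):
    for m in range(N):
        for p in range(N):
            if (n + p) % 2:
                continue                   # odd total power of x integrates to zero
            k = 3*n + 2*m + p
            series += (b**n / math.factorial(n)) * (c**m / math.factorial(m)) \
                      * (d**p / math.factorial(p)) \
                      * math.gamma((k + 1) / 4) / (-a)**((k + 1) / 4)
series *= 0.5 * math.exp(f)

numeric, _ = quad(lambda x: math.exp(a*x**4 + b*x**3 + c*x**2 + d*x + f),
                  -math.inf, math.inf)
print(series, numeric)                     # the two values agree
```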


See also

* List of integrals of Gaussian functions
* Common integrals in quantum field theory
* Normal distribution
* List of integrals of exponential functions
* Error function
* Berezin integral

