In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence.

The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series, while Maclaurin used it to calculate integrals. It was later generalized to Darboux's formula.


The formula

If m and n are natural numbers and f(x) is a real or complex valued continuous function for real numbers x in the interval [m, n], then the integral
I = \int_m^n f(x)\,dx
can be approximated by the sum (or vice versa)
S = f(m + 1) + \cdots + f(n - 1) + f(n)
(see rectangle method). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives f^{(k)}(x) evaluated at the endpoints of the interval, that is to say x = m and x = n.

Explicitly, for p a positive integer and a function f(x) that is p times continuously differentiable on the interval [m, n], we have
S - I = \sum_{k=1}^p \frac{B_k}{k!} \left(f^{(k-1)}(n) - f^{(k-1)}(m)\right) + R_p,
where B_k is the kth Bernoulli number (with B_1 = \tfrac12) and R_p is an error term which depends on n, m, p, and f and is usually small for suitable values of p.

The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for B_1. In this case we have
\sum_{i=m}^n f(i) = \int_m^n f(x)\,dx + \frac{f(n) + f(m)}{2} + \sum_{k=1}^{\lfloor p/2 \rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) + R_p,
or alternatively
\sum_{i=m+1}^n f(i) = \int_m^n f(x)\,dx + \frac{f(n) - f(m)}{2} + \sum_{k=1}^{\lfloor p/2 \rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) + R_p.
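The even-index form above can be checked numerically. The following sketch is illustrative only: the test function f(x) = 1/x², the endpoints m = 10 and n = 100, and the truncation after the B₄ term are arbitrary choices, not part of the original text.

```python
# Numerical check of the even-index Euler-Maclaurin formula for
# f(x) = 1/x^2 on [m, n] = [10, 100], truncated after the B_4 term.
f = lambda x: x**-2
d1 = lambda x: -2 * x**-3        # f'(x)
d3 = lambda x: -24 * x**-5       # f'''(x)

m, n = 10, 100
S = sum(f(i) for i in range(m, n + 1))   # sum_{i=m}^{n} f(i)
I = 1/m - 1/n                            # exact value of the integral of x^-2

B2, B4 = 1/6, -1/30                      # Bernoulli numbers
approx = I + (f(n) + f(m)) / 2 \
         + B2 / 2 * (d1(n) - d1(m)) \
         + B4 / 24 * (d3(n) - d3(m))     # B_{2k}/(2k)! terms for k = 1, 2

print(S, approx)        # the gap between the two is the remainder R_4
assert abs(S - approx) < 1e-7
```

The leftover discrepancy is of the size of the next Bernoulli term, consistent with the remainder estimate given below.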


The remainder term

The remainder term R_p arises because the integral is usually not exactly equal to the sum. The formula may be derived by applying repeated integration by parts to successive intervals [r, r + 1] for r = m, m + 1, \ldots, n - 1. The boundary terms in these integrations lead to the main terms of the formula, and the leftover integrals form the remainder term.

The remainder term has an exact expression in terms of the periodized Bernoulli functions P_k(x). The Bernoulli polynomials may be defined recursively by B_0(x) = 1 and, for k \geq 1,
\begin{align}
B_k'(x) &= kB_{k-1}(x), \\
\int_0^1 B_k(x)\,dx &= 0.
\end{align}
The periodized Bernoulli functions are defined as
P_k(x) = B_k\bigl(x - \lfloor x \rfloor\bigr),
where \lfloor x \rfloor denotes the largest integer less than or equal to x, so that x - \lfloor x \rfloor always lies in the interval [0, 1).

With this notation, the remainder term R_p equals
R_p = (-1)^{p+1} \int_m^n f^{(p)}(x) \frac{P_p(x)}{p!}\,dx.
When k > 0, it can be shown that for 0 \leq x \leq 1,
\bigl| B_k(x) \bigr| \le \frac{2 \cdot k!}{(2\pi)^k} \zeta(k),
where \zeta denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials B_k(x). The bound is achieved for even k when x is zero. The term \zeta(k) may be omitted for odd k, but the proof in this case is more complex (see Lehmer). Using this inequality, the size of the remainder term can be estimated as
\left| R_p \right| \leq \frac{2\zeta(p)}{(2\pi)^p} \int_m^n \left| f^{(p)}(x) \right| \,dx.
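The bound on |B_k(x)| can be illustrated for k = 2, where B_2(x) = x² − x + 1/6 and the right-hand side 2·2!/(2π)²·ζ(2) works out to exactly 1/6 because ζ(2) = π²/6. A brute-force sampling sketch (the grid resolution is an arbitrary choice):

```python
from math import pi

# For k = 2 the bound 2*k!/(2*pi)^k * zeta(k) equals exactly 1/6,
# since zeta(2) = pi^2/6 and 2*2!/(2*pi)^2 * pi^2/6 = 1/6.
zeta2 = pi**2 / 6
bound = 2 * 2 / (2 * pi)**2 * zeta2

B2 = lambda x: x * x - x + 1/6       # Bernoulli polynomial B_2 on [0, 1]
samples = [i / 10000 for i in range(10001)]
max_abs = max(abs(B2(x)) for x in samples)

print(max_abs, bound)   # both equal 1/6; the bound is attained at x = 0 and x = 1
assert max_abs <= bound + 1e-12
assert abs(bound - 1/6) < 1e-12
```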


Low-order cases

The Bernoulli numbers from B_1 to B_7 are \tfrac12, \tfrac16, 0, -\tfrac1{30}, 0, \tfrac1{42}, 0. Therefore, the low-order cases of the Euler–Maclaurin formula are:
\begin{align}
\sum_{i=m}^n f(i) - \int_m^n f(x)\,dx &= \frac{f(m) + f(n)}{2} + \int_m^n f'(x)P_1(x)\,dx \\
&= \frac{f(m) + f(n)}{2} + \frac16 \frac{f'(n) - f'(m)}{2!} - \int_m^n f''(x)\frac{P_2(x)}{2!}\,dx \\
&= \frac{f(m) + f(n)}{2} + \frac16 \frac{f'(n) - f'(m)}{2!} + \int_m^n f'''(x)\frac{P_3(x)}{3!}\,dx \\
&= \frac{f(m) + f(n)}{2} + \frac16 \frac{f'(n) - f'(m)}{2!} - \frac1{30} \frac{f'''(n) - f'''(m)}{4!} - \int_m^n f^{(4)}(x)\frac{P_4(x)}{4!}\,dx \\
&= \frac{f(m) + f(n)}{2} + \frac16 \frac{f'(n) - f'(m)}{2!} - \frac1{30} \frac{f'''(n) - f'''(m)}{4!} + \int_m^n f^{(5)}(x)\frac{P_5(x)}{5!}\,dx \\
&= \frac{f(m) + f(n)}{2} + \frac16 \frac{f'(n) - f'(m)}{2!} - \frac1{30} \frac{f'''(n) - f'''(m)}{4!} + \frac1{42} \frac{f^{(5)}(n) - f^{(5)}(m)}{6!} - \int_m^n f^{(6)}(x)\frac{P_6(x)}{6!}\,dx \\
&= \frac{f(m) + f(n)}{2} + \frac16 \frac{f'(n) - f'(m)}{2!} - \frac1{30} \frac{f'''(n) - f'''(m)}{4!} + \frac1{42} \frac{f^{(5)}(n) - f^{(5)}(m)}{6!} + \int_m^n f^{(7)}(x)\frac{P_7(x)}{7!}\,dx.
\end{align}


Applications


The Basel problem

The Basel problem is to determine the sum
1 + \frac14 + \frac19 + \frac1{16} + \frac1{25} + \cdots = \sum_{n=1}^\infty \frac{1}{n^2}.
Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals \frac{\pi^2}{6}, which he proved in the same year.
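Euler's acceleration can be replayed in a few lines. The sketch below applies the Euler–Maclaurin tail corrections to the partial sum of 1/k²; the choice of n = 10 summed terms and three Bernoulli corrections is illustrative, not a claim about Euler's actual computation.

```python
from math import pi

# Euler-Maclaurin acceleration of sum 1/k^2: summing only 10 terms
# plus a few tail corrections already gives ~10 correct decimals.
n = 10
partial = sum(1 / k**2 for k in range(1, n + 1))

# Rearranged asymptotic expansion:
# zeta(2) ~ partial + 1/n - 1/(2n^2) + B_2/n^3 + B_4/n^5 + B_6/n^7
B2, B4, B6 = 1/6, -1/30, 1/42
estimate = partial + 1/n - 1/(2*n**2) + B2/n**3 + B4/n**5 + B6/n**7

print(estimate, pi**2 / 6)
assert abs(estimate - pi**2 / 6) < 1e-9
```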


Sums involving a polynomial

If f is a polynomial and p is big enough, then the remainder term vanishes. For instance, if f(x) = x^3, we can choose p = 2 to obtain, after simplification,
\sum_{i=0}^n i^3 = \left(\frac{n(n+1)}{2}\right)^2.
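The resulting closed form is easy to verify exhaustively for small n; a minimal sketch:

```python
# Check the closed form sum_{i=0}^{n} i^3 = (n(n+1)/2)^2 for a range of n
for n in range(200):
    lhs = sum(i**3 for i in range(n + 1))
    rhs = (n * (n + 1) // 2) ** 2
    assert lhs == rhs
print("verified for n = 0..199")
```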


Approximation of integrals

The formula provides a means of approximating a finite integral. Let a < b be the endpoints of the interval of integration. Fix N, the number of points to use in the approximation, and denote the corresponding step size by h = \frac{b - a}{N - 1}. Set x_i = a + (i - 1)h, so that x_1 = a and x_N = b. Then:
\begin{align}
I &= \int_a^b f(x)\,dx \\
&\sim h\left(\frac{f(x_1)}{2} + f(x_2) + \cdots + f(x_{N-1}) + \frac{f(x_N)}{2}\right) + \frac{h^2}{12}\bigl[f'(x_1) - f'(x_N)\bigr] - \frac{h^4}{720}\bigl[f'''(x_1) - f'''(x_N)\bigr] + \cdots
\end{align}
This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms. Note that this asymptotic expansion is usually not convergent; there is some p, depending upon f and h, such that the terms past order p increase rapidly. Thus, the remainder term generally demands close attention.

The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation.
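The corrected trapezoid rule can be demonstrated on a smooth test function. In the sketch below, f(x) = eˣ on [0, 1] with N = 21 points is an arbitrary illustrative choice; every derivative of eˣ is eˣ, which keeps the correction terms trivial to evaluate.

```python
from math import exp

# Trapezoidal rule plus the first two Euler-Maclaurin endpoint corrections
# for integrating f(x) = exp(x) on [0, 1].
f = fp = fppp = exp              # exp is its own derivative of every order
a, b, N = 0.0, 1.0, 21           # N points gives step size h = (b - a)/(N - 1)
h = (b - a) / (N - 1)
xs = [a + i * h for i in range(N)]

trap = h * (f(xs[0]) / 2 + sum(f(x) for x in xs[1:-1]) + f(xs[-1]) / 2)
corrected = trap + h**2 / 12 * (fp(xs[0]) - fp(xs[-1])) \
                 - h**4 / 720 * (fppp(xs[0]) - fppp(xs[-1]))

exact = exp(1.0) - 1.0
print(abs(trap - exact), abs(corrected - exact))  # corrections shrink the error
assert abs(corrected - exact) < abs(trap - exact) * 1e-3
```

Each correction term knocks the error down by roughly a factor of h², which is the content of Romberg-style extrapolation as well.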


Asymptotic expansion of sums

In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is
\sum_{n=a}^b f(n) \sim \int_a^b f(x)\,dx + \frac{f(b) + f(a)}{2} + \sum_{k=1}^\infty \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(b) - f^{(2k-1)}(a)\right),
where a and b are integers. Often the expansion remains valid even after taking the limits a \to -\infty or b \to +\infty or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example,
\sum_{n=0}^\infty \frac{1}{(z + n)^2} \sim \underbrace{\int_0^\infty \frac{1}{(z + x)^2}\,dx}_{= \frac{1}{z}} + \frac{1}{2z^2} + \sum_{k=1}^\infty \frac{B_{2k}}{z^{2k+1}}.
Here the left-hand side is equal to \psi^{(1)}(z), namely the first-order polygamma function defined by
\psi^{(1)}(z) = \frac{d^2}{dz^2}\log \Gamma(z);
the gamma function \Gamma(z) is equal to (z - 1)! when z is a positive integer. This results in an asymptotic expansion for \psi^{(1)}(z). That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function.
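The expansion for ψ⁽¹⁾(z) can be checked numerically at an integer argument, where ψ⁽¹⁾(z) = π²/6 − Σ_{k<z} 1/k² gives an exact reference value. The choice z = 10 and the truncation after three Bernoulli terms in the sketch below are illustrative.

```python
from math import pi

# Asymptotic expansion of psi'(z) = sum_{n>=0} 1/(z+n)^2 at z = 10:
# psi'(z) ~ 1/z + 1/(2 z^2) + B_2/z^3 + B_4/z^5 + B_6/z^7 + ...
z = 10
B2, B4, B6 = 1/6, -1/30, 1/42
series = 1/z + 1/(2*z**2) + B2/z**3 + B4/z**5 + B6/z**7

# Exact reference: for integer z, psi'(z) = pi^2/6 - sum_{k=1}^{z-1} 1/k^2
exact = pi**2 / 6 - sum(1 / k**2 for k in range(1, z))

print(series, exact)
assert abs(series - exact) < 1e-9
```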


Examples

If s is an integer greater than 1 we have:
\sum_{k=1}^n \frac1{k^s} \approx \frac1{s-1} + \frac12 - \frac1{(s-1)n^{s-1}} + \frac1{2n^s} + \sum_{i=1}^\infty \frac{B_{2i}}{(2i)!}\left[\frac{(s+2i-2)!}{(s-1)!} - \frac{(s+2i-2)!}{(s-1)!\,n^{s+2i-1}}\right].
Collecting the constants into a value of the Riemann zeta function, we can write an asymptotic expansion:
\sum_{k=1}^n \frac1{k^s} \sim \zeta(s) - \frac1{(s-1)n^{s-1}} + \frac1{2n^s} - \sum_{i=1}^\infty \frac{B_{2i}}{(2i)!}\frac{(s+2i-2)!}{(s-1)!\,n^{s+2i-1}}.
For s equal to 2 this simplifies to
\sum_{k=1}^n \frac1{k^2} \sim \zeta(2) - \frac1n + \frac1{2n^2} - \sum_{i=1}^\infty \frac{B_{2i}}{n^{2i+1}},
or
\sum_{k=1}^n \frac1{k^2} \sim \frac{\pi^2}{6} - \frac1n + \frac1{2n^2} - \frac1{6n^3} + \frac1{30n^5} - \frac1{42n^7} + \cdots.
When s = 1, the corresponding technique gives an asymptotic expansion for the harmonic numbers:
\sum_{k=1}^n \frac1k \sim \gamma + \log n + \frac1{2n} - \sum_{k=1}^\infty \frac{B_{2k}}{2kn^{2k}},
where \gamma \approx 0.5772\ldots is the Euler–Mascheroni constant.
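A quick numerical check of the harmonic-number expansion; the choice n = 10 and truncation after two Bernoulli correction terms are illustrative, and the value of γ is hard-coded since it is not in Python's standard library.

```python
from math import log

# Harmonic number H_n versus its Euler-Maclaurin asymptotic expansion
# H_n ~ gamma + log n + 1/(2n) - B_2/(2 n^2) - B_4/(4 n^4) - ...
gamma = 0.5772156649015329   # Euler-Mascheroni constant (hard-coded)
n = 10
H = sum(1 / k for k in range(1, n + 1))

B2, B4 = 1/6, -1/30
approx = gamma + log(n) + 1/(2*n) - B2/(2*n**2) - B4/(4*n**4)

print(H, approx)   # already agree to ~8 decimal places at n = 10
assert abs(H - approx) < 1e-7
```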


Proofs


Derivation by mathematical induction

We outline the argument given in Apostol.

The Bernoulli polynomials B_n(x) and the periodic Bernoulli functions P_n(x) for n = 0, 1, 2, \ldots were introduced above. The first several Bernoulli polynomials are
\begin{align}
B_0(x) &= 1, \\
B_1(x) &= x - \tfrac12, \\
B_2(x) &= x^2 - x + \tfrac16, \\
B_3(x) &= x^3 - \tfrac32 x^2 + \tfrac12 x, \\
B_4(x) &= x^4 - 2x^3 + x^2 - \tfrac1{30}, \\
&\,\,\,\vdots
\end{align}
The values B_n(0) are the Bernoulli numbers B_n. Notice that for n \neq 1 we have
B_n = B_n(1) = B_n(0),
and for n = 1,
B_1 = B_1(1) = -B_1(0).
The functions P_n agree with the Bernoulli polynomials on the interval [0, 1] and are periodic with period 1. Furthermore, except when n = 1, they are also continuous. Thus,
P_n(0) = P_n(1) = B_n \quad \text{for } n \neq 1.
Let k be an integer, and consider the integral
\int_k^{k+1} f(x)\,dx = \int_k^{k+1} u\,dv,
where
\begin{align}
u &= f(x), \\
du &= f'(x)\,dx, \\
dv &= P_0(x)\,dx & \text{since } P_0(x) &= 1, \\
v &= P_1(x).
\end{align}
Integrating by parts, we get
\begin{align}
\int_k^{k+1} f(x)\,dx &= \bigl[uv\bigr]_k^{k+1} - \int_k^{k+1} v\,du \\
&= \bigl[f(x)P_1(x)\bigr]_k^{k+1} - \int_k^{k+1} f'(x)P_1(x)\,dx \\
&= B_1(1)f(k+1) - B_1(0)f(k) - \int_k^{k+1} f'(x)P_1(x)\,dx.
\end{align}
Using B_1(0) = -\tfrac12, B_1(1) = \tfrac12, and summing the above from k = 0 to k = n - 1, we get
\begin{align}
\int_0^n f(x)\,dx &= \int_0^1 f(x)\,dx + \cdots + \int_{n-1}^n f(x)\,dx \\
&= \frac{f(0)}{2} + f(1) + \dotsb + f(n-1) + \frac{f(n)}{2} - \int_0^n f'(x) P_1(x)\,dx.
\end{align}
Adding \frac{f(n) - f(0)}{2} to both sides and rearranging, we have
\sum_{k=1}^n f(k) = \int_0^n f(x)\,dx + \frac{f(n) - f(0)}{2} + \int_0^n f'(x) P_1(x)\,dx.
This is the p = 1 case of the summation formula. To continue the induction, we apply integration by parts to the error term:
\int_k^{k+1} f'(x)P_1(x)\,dx = \int_k^{k+1} u\,dv,
where
\begin{align}
u &= f'(x), \\
du &= f''(x)\,dx, \\
dv &= P_1(x)\,dx, \\
v &= \tfrac12 P_2(x).
\end{align}
The result of integrating by parts is
\begin{align}
\bigl[uv\bigr]_k^{k+1} - \int_k^{k+1} v\,du &= \left[\frac{f'(x)P_2(x)}{2}\right]_k^{k+1} - \frac12 \int_k^{k+1} f''(x)P_2(x)\,dx \\
&= \frac{B_2}{2}(f'(k + 1) - f'(k)) - \frac12 \int_k^{k+1} f''(x)P_2(x)\,dx.
\end{align}
Summing from k = 0 to k = n - 1 and substituting this for the lower order error term results in the p = 2 case of the formula,
\sum_{k=1}^n f(k) = \int_0^n f(x)\,dx + \frac{f(n) - f(0)}{2} + \frac{B_2}{2}\bigl(f'(n) - f'(0)\bigr) - \frac12 \int_0^n f''(x)P_2(x)\,dx.
This process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on identities for periodic Bernoulli functions.


See also

* Cesàro summation
* Euler summation
* Gauss–Kronrod quadrature formula
* Darboux's formula
* Euler–Boole summation

