Gaussian quadrature

In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration. (See numerical integration for more on quadrature rules.) An n-point Gaussian quadrature rule, named after Carl Friedrich Gauss, is a quadrature rule constructed to yield an exact result for polynomials of degree 2n - 1 or less by a suitable choice of the nodes x_i and weights w_i for i = 1, \ldots, n. The modern formulation using orthogonal polynomials was developed by Carl Gustav Jacobi in 1826. The most common domain of integration for such a rule is taken as [-1, 1], so the rule is stated as

:\int_{-1}^1 f(x)\,dx \approx \sum_{i=1}^n w_i f(x_i),

which is exact for polynomials of degree 2n - 1 or less. This exact rule is known as the Gauss–Legendre quadrature rule. The quadrature rule will only be an accurate approximation to the integral above if f(x) is well-approximated by a polynomial of degree 2n - 1 or less on [-1, 1].

The Gauss–Legendre quadrature rule is not typically used for integrable functions with endpoint singularities. Instead, if the integrand can be written as

:f(x) = \left(1 - x\right)^\alpha \left(1 + x\right)^\beta g(x),\quad \alpha,\beta > -1,

where g is well-approximated by a low-degree polynomial, then alternative nodes x_i' and weights w_i' will usually give more accurate quadrature rules. These are known as Gauss–Jacobi quadrature rules, i.e.,

:\int_{-1}^1 f(x)\,dx = \int_{-1}^1 \left(1 - x\right)^\alpha \left(1 + x\right)^\beta g(x)\,dx \approx \sum_{i=1}^n w_i' g\left(x_i'\right).

Common weights include \frac{1}{\sqrt{1 - x^2}} (Chebyshev–Gauss) and \sqrt{1 - x^2}. One may also want to integrate over semi-infinite intervals (Gauss–Laguerre quadrature) and infinite intervals (Gauss–Hermite quadrature).

It can be shown (see Press et al., or Stoer and Bulirsch) that the quadrature nodes x_i are the roots of a polynomial belonging to a class of orthogonal polynomials (the class orthogonal with respect to a weighted inner product). This is a key observation for computing Gauss quadrature nodes and weights.
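For illustration, the standard families mentioned above are available in NumPy's polynomial module. The following sketch (not part of the original article; it assumes NumPy is installed) checks the Gauss–Hermite rule against \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi} and the Gauss–Laguerre rule against \int_0^\infty e^{-x}\,dx = 1:

```python
import numpy as np

# Gauss-Hermite: nodes/weights for integrals of the form e^{-x^2} f(x) over the real line.
xh, wh = np.polynomial.hermite.hermgauss(10)
print(np.sum(wh))            # f = 1: the weights sum to sqrt(pi)

# Gauss-Laguerre: nodes/weights for integrals of the form e^{-x} f(x) over [0, inf).
xl, wl = np.polynomial.laguerre.laggauss(10)
print(np.sum(wl))            # f = 1: the weights sum to 1

# The 10-point Hermite rule is exact for polynomials of degree up to 19,
# so e^{-x^2} x^2 integrates exactly to sqrt(pi)/2.
print(np.sum(wh * xh**2))
```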


Gauss–Legendre quadrature

For the simplest integration problem stated above, i.e., f(x) is well-approximated by polynomials on [-1, 1], the associated orthogonal polynomials are Legendre polynomials, denoted by P_n(x). With the n-th polynomial normalized to give P_n(1) = 1, the i-th Gauss node, x_i, is the i-th root of P_n and the weights are given by the formula

:w_i = \frac{2}{\left(1 - x_i^2\right)\left[P'_n(x_i)\right]^2}.

Some low-order quadrature rules are tabulated below (over interval [-1, 1], see the section below for other intervals).
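As a numerical check (an illustrative sketch assuming NumPy, not part of the original article), the nodes and weights obtained from the roots of P_n and the weight formula above can be compared against NumPy's built-in `leggauss`:

```python
import numpy as np
from numpy.polynomial import legendre as L

n = 5
c = np.zeros(n + 1)
c[-1] = 1.0                                 # Legendre-basis coefficients of P_n
nodes = L.legroots(c)                       # the Gauss nodes are the roots of P_n
dP = L.legval(nodes, L.legder(c))           # P'_n evaluated at the nodes
weights = 2.0 / ((1.0 - nodes**2) * dP**2)  # w_i = 2 / ((1 - x_i^2) [P'_n(x_i)]^2)

ref_x, ref_w = np.polynomial.legendre.leggauss(n)
print(np.allclose(nodes, ref_x), np.allclose(weights, ref_w))
```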


Change of interval

An integral over [a, b] must be changed into an integral over [-1, 1] before applying the Gaussian quadrature rule. This change of interval can be done in the following way:

:\int_a^b f(x)\,dx = \int_{-1}^1 f\left(\frac{b - a}{2}\xi + \frac{a + b}{2}\right)\,\frac{dx}{d\xi}\,d\xi

with

:\frac{dx}{d\xi} = \frac{b - a}{2}.

Applying the n-point Gaussian quadrature (\xi, w) rule then results in the following approximation:

:\int_a^b f(x)\,dx \approx \frac{b - a}{2} \sum_{i=1}^n w_i f\left(\frac{b - a}{2}\xi_i + \frac{a + b}{2}\right).
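The change of interval is a two-line affair in code. This sketch (assuming NumPy; not part of the original article) maps the 5-point Gauss–Legendre rule onto [0, \pi] and evaluates \int_0^\pi \sin x\,dx, whose exact value is 2:

```python
import numpy as np

def gauss_ab(f, a, b, n):
    # Map the n-point Gauss-Legendre rule from [-1, 1] onto [a, b]:
    # x = (b-a)/2 * xi + (a+b)/2, with Jacobian factor (b-a)/2.
    xi, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * xi + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(w * f(x))

approx = gauss_ab(np.sin, 0.0, np.pi, 5)
print(approx)   # very close to the exact value 2
```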


Example of Two-Point Gauss Quadrature Rule

Use the two-point Gauss quadrature rule to approximate the distance in meters covered by a rocket from t = 8\,\mathrm{s} to t = 30\,\mathrm{s}, as given by

:x = \int_{8}^{30} \left(2000\ln\left[\frac{140000}{140000 - 2100t}\right] - 9.8t\right) dt

Change the limits so that one can use the weights and abscissas given in Table 1. Also, find the absolute relative true error. The true value is given as 11061.34 m.

Solution

First, changing the limits of integration from \left[8, 30\right] to \left[-1, 1\right] gives

:\begin{align} \int_{8}^{30} f(t)\,dt &= \frac{30 - 8}{2}\int_{-1}^{1} f\left(\frac{30 - 8}{2}x + \frac{30 + 8}{2}\right) dx \\ &= 11\int_{-1}^{1} f\left(11x + 19\right) dx \end{align}

Next, get the weighting factors and function argument values from Table 1 for the two-point rule,
*c_1 = 1.000000000
*x_1 = -0.577350269
*c_2 = 1.000000000
*x_2 = 0.577350269

Now we can use the Gauss quadrature formula

:\begin{align} 11\int_{-1}^{1} f\left(11x + 19\right) dx & \approx 11\left[c_1 f\left(11x_1 + 19\right) + c_2 f\left(11x_2 + 19\right)\right] \\ &= 11\left[f\left(11(-0.5773503) + 19\right) + f\left(11(0.5773503) + 19\right)\right] \\ &= 11\left[f(12.64915) + f(25.35085)\right] \\ &= 11\left[(296.8317) + (708.4811)\right] \\ &= 11058.44 \end{align}

since

:f(12.64915) = 2000\ln\left[\frac{140000}{140000 - 2100(12.64915)}\right] - 9.8(12.64915) = 296.8317

:f(25.35085) = 2000\ln\left[\frac{140000}{140000 - 2100(25.35085)}\right] - 9.8(25.35085) = 708.4811

Given that the true value is 11061.34 m, the absolute relative true error \left|\varepsilon_t\right| is

:\left|\varepsilon_t\right| = \left|\frac{11061.34 - 11058.44}{11061.34}\right| \times 100\% = 0.0262\%
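The hand computation above can be reproduced with a short script (a sketch assuming NumPy; not part of the original worked example):

```python
import numpy as np

# Integrand from the rocket example: velocity in m/s as a function of time t.
def f(t):
    return 2000.0 * np.log(140000.0 / (140000.0 - 2100.0 * t)) - 9.8 * t

xi, w = np.polynomial.legendre.leggauss(2)   # two-point rule: xi = +-1/sqrt(3), w = 1
a, b = 8.0, 30.0
approx = 0.5 * (b - a) * np.sum(w * f(0.5 * (b - a) * xi + 0.5 * (a + b)))

true = 11061.34
rel_err = abs((true - approx) / true) * 100.0
print(round(approx, 2), round(rel_err, 4))   # approx 11058.44 m, error approx 0.0262 %
```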


Other forms

The integration problem can be expressed in a slightly more general way by introducing a positive weight function \omega into the integrand, and allowing an interval other than [-1, 1]. That is, the problem is to calculate

:\int_a^b \omega(x)\,f(x)\,dx

for some choices of a, b, and \omega. For a = -1, b = 1, and \omega(x) = 1, the problem is the same as that considered above. Other choices lead to other integration rules. Some of these are tabulated below. Equation numbers are given for Abramowitz and Stegun (A & S).


Fundamental theorem

Let p_n be a nontrivial polynomial of degree n such that

:\int_a^b \omega(x) \, x^k p_n(x) \, dx = 0, \quad \text{for all } k = 0, 1, \ldots, n - 1.

Note that this will be true for all the orthogonal polynomials above, because each p_n is constructed to be orthogonal to the other polynomials p_j for j < n, and x^k is in the span of that set.

If we pick the n nodes x_i to be the zeros of p_n, then there exist n weights w_i which make the Gauss-quadrature computed integral exact for all polynomials h(x) of degree 2n - 1 or less. Furthermore, all these nodes x_i will lie in the open interval (a, b).

To prove the first part of this claim, let h(x) be any polynomial of degree 2n - 1 or less. Divide it by the orthogonal polynomial p_n to get

:h(x) = p_n(x) \, q(x) + r(x),

where q(x) is the quotient, of degree n - 1 or less (because the sum of its degree and that of the divisor p_n must equal that of the dividend), and r(x) is the remainder, also of degree n - 1 or less (because the degree of the remainder is always less than that of the divisor). Since p_n is by assumption orthogonal to all monomials of degree less than n, it must be orthogonal to the quotient q(x). Therefore

:\int_a^b \omega(x)\,h(x)\,dx = \int_a^b \omega(x)\,\big( p_n(x) q(x) + r(x) \big)\,dx = \int_a^b \omega(x)\,r(x)\,dx.

Since the remainder r(x) is of degree n - 1 or less, we can interpolate it exactly using n interpolation points with Lagrange polynomials l_i(x), where

:l_i(x) = \prod_{j \ne i} \frac{x - x_j}{x_i - x_j}.

We have

:r(x) = \sum_{i=1}^n l_i(x) \, r(x_i).

Then its integral will equal

:\int_a^b \omega(x)\,r(x)\,dx = \int_a^b \omega(x) \sum_{i=1}^n l_i(x) \, r(x_i) \, dx = \sum_{i=1}^n r(x_i) \int_a^b \omega(x) \, l_i(x) \, dx = \sum_{i=1}^n r(x_i) \, w_i,

where w_i, the weight associated with the node x_i, is defined to equal the weighted integral of l_i(x) (see below for other formulas for the weights). But all the x_i are roots of p_n, so the division formula above tells us that

:h(x_i) = p_n(x_i) \, q(x_i) + r(x_i) = r(x_i),

for all i. Thus we finally have

:\int_a^b \omega(x)\,h(x)\,dx = \int_a^b \omega(x)\,r(x)\,dx = \sum_{i=1}^n w_i \, r(x_i) = \sum_{i=1}^n w_i \, h(x_i).

This proves that for any polynomial h(x) of degree 2n - 1 or less, its integral is given exactly by the Gaussian quadrature sum.

To prove the second part of the claim, consider the factored form of the polynomial p_n. Any complex conjugate roots will yield a quadratic factor that is either strictly positive or strictly negative over the entire real line. Any factors for roots outside the interval from a to b will not change sign over that interval. Finally, for factors corresponding to roots x_i inside the interval from a to b that are of odd multiplicity, multiply p_n by one more factor to make a new polynomial

:p_n(x) \, \prod_i (x - x_i).

This polynomial cannot change sign over the interval from a to b because all its roots there are now of even multiplicity. So the integral

:\int_a^b p_n(x) \left( \prod_i (x - x_i) \right) \omega(x) \, dx \ne 0,

since the weight function \omega(x) is always non-negative. But p_n is orthogonal to all polynomials of degree n - 1 or less, so the degree of the product

:\prod_i (x - x_i)

must be at least n. Therefore p_n has n distinct roots, all real, in the interval from a to b.
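The exactness claim is easy to verify numerically. This sketch (assuming NumPy; not part of the original proof) integrates a random polynomial of degree 2n - 1 with an n-point rule and compares against the exact antiderivative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
coeffs = rng.standard_normal(2 * n)      # random polynomial of degree 2n - 1 = 7

x, w = np.polynomial.legendre.leggauss(n)
approx = np.sum(w * np.polyval(coeffs, x))

# Exact integral over [-1, 1] via the antiderivative.
P = np.polyint(np.poly1d(coeffs))
exact = P(1.0) - P(-1.0)
print(np.isclose(approx, exact))
```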


General formula for the weights

The weights can be expressed as

:w_i = \frac{a_n}{a_{n-1}} \frac{\int_a^b \omega(x)\,p_{n-1}(x)^2\,dx}{p'_n(x_i)\,p_{n-1}(x_i)} \qquad (1)

where a_k is the coefficient of x^k in p_k(x). To prove this, note that using Lagrange interpolation one can express r(x) in terms of r(x_i) as

:r(x) = \sum_{i=1}^n r(x_i) \prod_{j \ne i} \frac{x - x_j}{x_i - x_j}

because r(x) has degree less than n and is thus fixed by the values it attains at n different points. Multiplying both sides by \omega(x) and integrating from a to b yields

:\int_a^b \omega(x)\,r(x)\,dx = \sum_{i=1}^n r(x_i) \int_a^b \omega(x) \prod_{j \ne i} \frac{x - x_j}{x_i - x_j}\,dx

The weights w_i are thus given by

:w_i = \int_a^b \omega(x) \prod_{j \ne i} \frac{x - x_j}{x_i - x_j}\,dx \qquad (2)

This integral expression for w_i can be expressed in terms of the orthogonal polynomials p_n(x) and p_{n-1}(x) as follows. We can write

:\prod_{j \ne i} \left(x - x_j\right) = \frac{\prod_{j=1}^n \left(x - x_j\right)}{x - x_i} = \frac{p_n(x)}{a_n \left(x - x_i\right)}

where a_n is the coefficient of x^n in p_n(x). Taking the limit of x to x_i yields using L'Hôpital's rule

:\prod_{j \ne i} \left(x_i - x_j\right) = \frac{p'_n(x_i)}{a_n}

We can thus write the integral expression for the weights as

:w_i = \frac{1}{p'_n(x_i)} \int_a^b \omega(x)\,\frac{p_n(x)}{x - x_i}\,dx \qquad (3)

In the integrand, writing

:\frac{x^k}{x - x_i} = \frac{x^k - x_i^k}{x - x_i} + x_i^k\,\frac{1}{x - x_i}

yields

:\int_a^b \omega(x)\,\frac{x^k p_n(x)}{x - x_i}\,dx = x_i^k \int_a^b \omega(x)\,\frac{p_n(x)}{x - x_i}\,dx

provided k \leq n, because

:\frac{x^k - x_i^k}{x - x_i}

is a polynomial of degree k - 1 which is then orthogonal to p_n(x). So, if q(x) is a polynomial of at most nth degree we have

:\int_a^b \omega(x)\,\frac{p_n(x)}{x - x_i}\,dx = \frac{1}{q(x_i)} \int_a^b \omega(x)\,\frac{q(x)\,p_n(x)}{x - x_i}\,dx

We can evaluate the integral on the right hand side for q(x) = p_{n-1}(x) as follows. Because \frac{p_n(x)}{x - x_i} is a polynomial of degree n - 1, we have

:\frac{p_n(x)}{x - x_i} = a_n x^{n-1} + s(x)

where s(x) is a polynomial of degree n - 2. Since s(x) is orthogonal to p_{n-1}(x) we have

:\int_a^b \omega(x)\,\frac{p_n(x)}{x - x_i}\,dx = \frac{a_n}{p_{n-1}(x_i)} \int_a^b \omega(x)\,p_{n-1}(x)\,x^{n-1}\,dx

We can then write

:x^{n-1} = \left(x^{n-1} - \frac{p_{n-1}(x)}{a_{n-1}}\right) + \frac{p_{n-1}(x)}{a_{n-1}}

The term in the brackets is a polynomial of degree n - 2, which is therefore orthogonal to p_{n-1}(x). The integral can thus be written as

:\int_a^b \omega(x)\,\frac{p_n(x)}{x - x_i}\,dx = \frac{a_n}{a_{n-1}\,p_{n-1}(x_i)} \int_a^b \omega(x)\,p_{n-1}(x)^2\,dx

According to equation (3), the weights are obtained by dividing this by p'_n(x_i) and that yields the expression in equation (1).

w_i can also be expressed in terms of the orthogonal polynomials p_n(x) and now p_{n+1}(x). In the 3-term recurrence relation p_{n+1}(x_i) = (a)\,p_n(x_i) + (b)\,p_{n-1}(x_i) the term with p_n(x_i) vanishes (since x_i is a root of p_n), so p_{n-1}(x_i) in Eq. (1) can be replaced by \frac{1}{b} p_{n+1}\left(x_i\right).
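Equation (2), the weight as the weighted integral of the Lagrange basis polynomial, can be checked directly. This sketch (assuming NumPy and SciPy; an illustration, not part of the original derivation) integrates each l_i over [-1, 1] with \omega(x) = 1 and compares against the reference Gauss–Legendre weights:

```python
import numpy as np
from scipy.integrate import quad

n = 4
x, w_ref = np.polynomial.legendre.leggauss(n)

def lagrange_basis(i, t):
    # l_i(t) = prod_{j != i} (t - x_j) / (x_i - x_j)
    num = np.prod([t - x[j] for j in range(n) if j != i])
    den = np.prod([x[i] - x[j] for j in range(n) if j != i])
    return num / den

# Equation (2): w_i = integral over [-1, 1] of omega(t) l_i(t) dt, with omega = 1.
w = np.array([quad(lambda t, i=i: lagrange_basis(i, t), -1.0, 1.0)[0]
              for i in range(n)])
print(np.allclose(w, w_ref))
```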


Proof that the weights are positive

Consider the following polynomial of degree 2n - 2:

:f(x) = \prod_{\substack{j = 1 \\ j \ne i}}^{n} \frac{\left(x - x_j\right)^2}{\left(x_i - x_j\right)^2}

where, as above, the x_j are the roots of the polynomial p_n(x). Clearly f(x_j) = \delta_{ij}. Since the degree of f(x) is less than 2n - 1, the Gaussian quadrature formula involving the weights and nodes obtained from p_n(x) applies. Since f(x_j) = 0 for j not equal to i, we have

:\int_a^b \omega(x)\,f(x)\,dx = \sum_{j=1}^{n} w_j f(x_j) = \sum_{j=1}^{n} \delta_{ij} w_j = w_i.

Since both \omega(x) and f(x) are non-negative functions, it follows that w_i > 0.


Computation of Gaussian quadrature rules

There are many algorithms for computing the nodes x_i and weights w_i of Gaussian quadrature rules. The most popular are the Golub–Welsch algorithm requiring O(n^2) operations, Newton's method for solving p_n(x) = 0 using the three-term recurrence for evaluation requiring O(n^2) operations, and asymptotic formulas for large ''n'' requiring O(n) operations.


Recurrence relation

Orthogonal polynomials p_r with (p_r, p_s) = 0 for r \ne s for a scalar product (\cdot, \cdot), degree(p_r) = r and leading coefficient one (i.e. monic orthogonal polynomials) satisfy the recurrence relation

:p_{r+1}(x) = (x - a_{r,r})\,p_r(x) - a_{r,r-1}\,p_{r-1}(x) - \cdots - a_{r,0}\,p_0(x)

and scalar product defined

:(f(x), g(x)) = \int_a^b \omega(x)\,f(x)\,g(x)\,dx

for r = 0, 1, \ldots, n - 1 where n is the maximal degree which can be taken to be infinity, and where

:a_{r,s} = \frac{\left(x p_r, p_s\right)}{\left(p_s, p_s\right)}.

First of all, the polynomials defined by the recurrence relation starting with p_0(x) = 1 have leading coefficient one and correct degree. Given the starting point by p_0, the orthogonality of p_r can be shown by induction. For r = s = 0 one has

:(p_1, p_0) = (x - a_{0,0})(p_0, p_0) = (x p_0, p_0) - a_{0,0}(p_0, p_0) = (x p_0, p_0) - (x p_0, p_0) = 0.

Now if p_0, p_1, \ldots, p_r are orthogonal, then also p_{r+1}, because in

:(p_{r+1}, p_s) = (x p_r, p_s) - a_{r,r}(p_r, p_s) - a_{r,r-1}(p_{r-1}, p_s) - \cdots - a_{r,0}(p_0, p_s)

all scalar products vanish except for the first one and the one where p_s meets the same orthogonal polynomial. Therefore,

:(p_{r+1}, p_s) = (x p_r, p_s) - a_{r,s}(p_s, p_s) = (x p_r, p_s) - (x p_r, p_s) = 0.

However, if the scalar product satisfies (xf, g) = (f, xg) (which is the case for Gaussian quadrature), the recurrence relation reduces to a three-term recurrence relation: For s < r - 1, x p_s is a polynomial of degree less than or equal to r - 1. On the other hand, p_r is orthogonal to every polynomial of degree less than or equal to r - 1. Therefore, one has (x p_r, p_s) = (p_r, x p_s) = 0 and a_{r,s} = 0 for s < r - 1. The recurrence relation then simplifies to

:p_{r+1}(x) = (x - a_{r,r})\,p_r(x) - a_{r,r-1}\,p_{r-1}(x)

or

:p_{r+1}(x) = (x - a_r)\,p_r(x) - b_r\,p_{r-1}(x)

(with the convention p_{-1}(x) \equiv 0) where

:a_r := \frac{(x p_r, p_r)}{(p_r, p_r)}, \qquad b_r := \frac{(x p_r, p_{r-1})}{(p_{r-1}, p_{r-1})} = \frac{(p_r, p_r)}{(p_{r-1}, p_{r-1})}

(the last because of (x p_r, p_{r-1}) = (p_r, x p_{r-1}) = (p_r, p_r), since x p_{r-1} differs from p_r by a degree less than r).
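The three-term recurrence can be run numerically for any weight function. This sketch (assuming NumPy and SciPy; an illustration, not part of the original text) builds the monic orthogonal polynomials for \omega(x) = 1 on [-1, 1] by computing a_r and b_r from the inner products, and recovers the monic Legendre polynomials:

```python
import numpy as np
from scipy.integrate import quad

def inner(p, q):
    # Scalar product (p, q) = integral of p(x) q(x) over [-1, 1] with omega = 1.
    return quad(lambda x: np.polyval(p, x) * np.polyval(q, x), -1.0, 1.0)[0]

p_prev = np.array([0.0])   # p_{-1} = 0 (convention)
p_curr = np.array([1.0])   # p_0 = 1
polys = [p_curr]
for r in range(3):
    xp = np.append(p_curr, 0.0)                      # x * p_r (monomial coefficients)
    a = inner(xp, p_curr) / inner(p_curr, p_curr)    # a_r = (x p_r, p_r)/(p_r, p_r)
    b = inner(p_curr, p_curr) / inner(p_prev, p_prev) if r > 0 else 0.0
    p_next = np.polysub(np.polysub(xp, a * p_curr), b * p_prev)
    p_prev, p_curr = p_curr, p_next
    polys.append(p_curr)

# Monic Legendre polynomials: p_2 = x^2 - 1/3, p_3 = x^3 - (3/5) x.
print(np.round(polys[2], 6))
print(np.round(polys[3], 6))
```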


The Golub-Welsch algorithm

The three-term recurrence relation can be written in matrix form J\tilde{P} = x\tilde{P} - p_n(x) \mathbf{e}_n where \tilde{P} = \begin{bmatrix} p_0(x) & p_1(x) & \ldots & p_{n-1}(x) \end{bmatrix}^\mathsf{T}, \mathbf{e}_n is the nth standard basis vector, i.e., \mathbf{e}_n = \begin{bmatrix} 0 & \ldots & 0 & 1 \end{bmatrix}^\mathsf{T}, and J is the so-called Jacobi matrix:

:\mathbf{J} = \begin{bmatrix} a_0 & 1 & 0 & \ldots & \ldots & \ldots \\ b_1 & a_1 & 1 & 0 & \ldots & \ldots \\ 0 & b_2 & a_2 & 1 & 0 & \ldots \\ 0 & \ldots & \ldots & \ldots & \ldots & 0 \\ \ldots & \ldots & 0 & b_{n-2} & a_{n-2} & 1 \\ \ldots & \ldots & \ldots & 0 & b_{n-1} & a_{n-1} \end{bmatrix}

The zeros x_j of the polynomial of degree n, which are used as nodes for the Gaussian quadrature, can be found by computing the eigenvalues of this tridiagonal matrix. This procedure is known as the ''Golub–Welsch algorithm''.

For computing the weights and nodes, it is preferable to consider the symmetric tridiagonal matrix \mathcal{J} with elements

:\begin{align} \mathcal{J}_{i,i} = J_{i,i} &= a_{i-1}, && i = 1, \ldots, n \\ \mathcal{J}_{i-1,i} = \mathcal{J}_{i,i-1} = \sqrt{J_{i,i-1}\,J_{i-1,i}} &= \sqrt{b_{i-1}}, && i = 2, \ldots, n. \end{align}

\mathbf{J} and \mathcal{J} are similar matrices and therefore have the same eigenvalues (the nodes). The weights can be computed from the corresponding eigenvectors: If \phi^{(j)} is a normalized eigenvector (i.e., an eigenvector with euclidean norm equal to one) associated to the eigenvalue x_j, the corresponding weight can be computed from the first component of this eigenvector, namely:

:w_j = \mu_0 \left(\phi_1^{(j)}\right)^2

where \mu_0 is the integral of the weight function

:\mu_0 = \int_a^b \omega(x)\,dx.

See the references for further details.
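The algorithm fits in a few lines. This sketch (assuming NumPy and SciPy; an illustration, not a reference implementation) applies Golub–Welsch to the Legendre case, where the monic recurrence coefficients are a_r = 0 and b_r = r^2/(4r^2 - 1) and \mu_0 = 2, and checks against `leggauss`:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def golub_welsch_legendre(n):
    # Recurrence coefficients of the monic Legendre polynomials
    # (weight omega(x) = 1 on [-1, 1]): a_r = 0, b_r = r^2 / (4 r^2 - 1).
    k = np.arange(1, n)
    offdiag = np.sqrt(k**2 / (4.0 * k**2 - 1.0))   # sqrt(b_1), ..., sqrt(b_{n-1})
    diag = np.zeros(n)                              # a_0, ..., a_{n-1}
    # Eigenvalues of the symmetric Jacobi matrix are the nodes; the weights
    # come from the first components of the normalized eigenvectors.
    nodes, vecs = eigh_tridiagonal(diag, offdiag)
    mu0 = 2.0                                       # integral of omega over [-1, 1]
    weights = mu0 * vecs[0, :]**2
    return nodes, weights

nodes, weights = golub_welsch_legendre(5)
ref_x, ref_w = np.polynomial.legendre.leggauss(5)
print(np.allclose(nodes, ref_x), np.allclose(weights, ref_w))
```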


Error estimates

The error of a Gaussian quadrature rule can be stated as follows (see Stoer and Bulirsch). For an integrand which has 2n continuous derivatives,

:\int_a^b \omega(x)\,f(x)\,dx - \sum_{i=1}^n w_i\,f(x_i) = \frac{f^{(2n)}(\xi)}{(2n)!} \, (p_n, p_n)

for some \xi in (a, b), where p_n is the monic (i.e. the leading coefficient is 1) orthogonal polynomial of degree n and where

:(f, g) = \int_a^b \omega(x)\,f(x)\,g(x)\,dx.

In the important special case of \omega(x) = 1, we have the error estimate

:\frac{\left(b - a\right)^{2n+1} \left(n!\right)^4}{(2n + 1)\left[(2n)!\right]^3} f^{(2n)}(\xi), \qquad a < \xi < b.

Stoer and Bulirsch remark that this error estimate is inconvenient in practice, since it may be difficult to estimate the 2n-th order derivative, and furthermore the actual error may be much less than a bound established by the derivative. Another approach is to use two Gaussian quadrature rules of different orders, and to estimate the error as the difference between the two results. For this purpose, Gauss–Kronrod quadrature rules can be useful.
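The difference-of-two-rules approach is easy to demonstrate. This sketch (assuming NumPy; an illustration, not part of the original text) estimates the error of a 3-point rule for \int_{-1}^1 e^x\,dx by comparing it with a much more accurate 7-point rule:

```python
import numpy as np

def gauss(f, n):
    x, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w * f(x))

exact = np.e - 1.0 / np.e          # integral of e^x over [-1, 1]
low = gauss(np.exp, 3)
high = gauss(np.exp, 7)            # 7-point rule: essentially exact here

est_error = abs(high - low)        # practical error estimate for the 3-point rule
true_error = abs(exact - low)
print(est_error, true_error)       # the two agree to near machine precision
```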


Gauss–Kronrod rules

If the interval [a, b] is subdivided, the Gauss evaluation points of the new subintervals never coincide with the previous evaluation points (except at the midpoint for odd numbers of evaluation points), and thus the integrand must be evaluated at every point. ''Gauss–Kronrod rules'' are extensions of Gauss quadrature rules generated by adding n + 1 points to an n-point rule in such a way that the resulting rule is exact for polynomials of degree 3n + 1. This allows for computing higher-order estimates while re-using the function values of a lower-order estimate. The difference between a Gauss quadrature rule and its Kronrod extension is often used as an estimate of the approximation error.
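This error-estimation strategy is what SciPy's `quad` uses: it wraps QUADPACK, whose adaptive routines pair a Gauss rule with its Kronrod extension and report the resulting difference-based error estimate. A minimal sketch (assuming SciPy is installed):

```python
import numpy as np
from scipy.integrate import quad

# quad returns (value, estimated absolute error); the estimate comes from
# comparing the Gauss rule with its Kronrod extension on each subinterval.
val, err = quad(np.exp, -1.0, 1.0)
exact = np.e - 1.0 / np.e
print(val, err, abs(val - exact))
```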


Gauss–Lobatto rules

Also known as Lobatto quadrature, named after Dutch mathematician Rehuel Lobatto. It is similar to Gaussian quadrature with the following differences:
# The integration points include the end points of the integration interval.
# It is accurate for polynomials up to degree 2n - 3, where n is the number of integration points.

Lobatto quadrature of function f(x) on interval [-1, 1]:

:\int_{-1}^1 f(x)\,dx = \frac{2}{n(n - 1)}\left[f(1) + f(-1)\right] + \sum_{i=2}^{n-1} w_i f(x_i) + R_n.

Abscissas: x_i is the (i - 1)st zero of P'_{n-1}(x), here P_m(x) denotes the standard Legendre polynomial of m-th degree and the dash denotes the derivative. Weights:

:w_i = \frac{2}{n(n - 1)\left[P_{n-1}(x_i)\right]^2}, \qquad x_i \ne \pm 1.

Remainder:

:R_n = \frac{-n (n - 1)^3\, 2^{2n-1} \left[(n - 2)!\right]^4}{(2n - 1)\left[(2n - 2)!\right]^3} f^{(2n-2)}(\xi), \qquad -1 < \xi < 1.

An adaptive variant of this algorithm with 2 interior nodes is found in GNU Octave and MATLAB as quadl and integrate.
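The Lobatto abscissas and weights can be generated directly from the formulas above. This sketch (assuming NumPy; an illustration, not part of the original text) builds the n = 4 rule, whose known values are nodes \pm 1, \pm 1/\sqrt{5} and weights 1/6, 5/6:

```python
import numpy as np
from numpy.polynomial import legendre as L

def lobatto(n):
    # Interior nodes: zeros of P'_{n-1}(x); the endpoints -1 and 1 are always nodes.
    c = np.zeros(n)
    c[-1] = 1.0                          # Legendre-basis coefficients of P_{n-1}
    interior = L.legroots(L.legder(c))
    nodes = np.concatenate(([-1.0], interior, [1.0]))
    # Endpoint weights 2/(n(n-1)); interior weights 2/(n(n-1) [P_{n-1}(x_i)]^2).
    w_end = 2.0 / (n * (n - 1))
    w_int = w_end / L.legval(interior, c)**2
    weights = np.concatenate(([w_end], w_int, [w_end]))
    return nodes, weights

x, w = lobatto(4)
print(np.round(x, 6))
print(np.round(w, 6))
```

As a sanity check, the weights sum to 2 and the rule integrates x^4 over [-1, 1] exactly (the 4-point Lobatto rule is exact through degree 2n - 3 = 5).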


References


* Implementing an Accurate Generalized Gaussian Quadrature Solution to Find the Elastic Field in a Homogeneous Anisotropic Media
* Walter Gautschi: "A Software Repository for Gaussian Quadratures and Christoffel Functions", SIAM, (2020).


External links

* ALGLIB contains a collection of algorithms for numerical integration (in C# / C++ / Delphi / Visual Basic / etc.)
* GNU Scientific Library — includes C version of QUADPACK algorithms
From Lobatto Quadrature to the Euler constant e


at ''Holistic Numerical Methods Institute'' * {{MathWorld, id=Legendre-GaussQuadrature, title=Legendre-Gauss Quadrature
Gaussian Quadrature
by Chris Maes and Anton Antonov, Wolfram Demonstrations Project.
Tabulated weights and abscissae with Mathematica source code
high precision (16 and 256 decimal places) Legendre-Gaussian quadrature weights and abscissas, for ''n''=2 through ''n''=64, with Mathematica source code.

for abscissas and weights generation for arbitrary weighting functions W(x), integration domains and precisions.

* http://www.boost.org/doc/libs/release/libs/math/doc/html/math_toolkit/gauss_kronrod.html Gauss-Kronrod Quadrature in Boost.Math
Nodes and Weights of Gaussian quadrature
Numerical integration (quadrature)