Clenshaw algorithm

In numerical analysis, the Clenshaw algorithm, also called Clenshaw summation, is a recursive method to evaluate a linear combination of Chebyshev polynomials. (Clenshaw's original paper is written in terms of the ''shifted'' Chebyshev polynomials of the first kind, T^*_n(x) = T_n(2x-1).) The method was published by Charles William Clenshaw in 1955. It is a generalization of Horner's method for evaluating a linear combination of monomials. It generalizes to more than just Chebyshev polynomials; it applies to any class of functions that can be defined by a three-term recurrence relation.


Clenshaw algorithm

In full generality, the Clenshaw algorithm computes the weighted sum of a finite series of functions \phi_k(x):

:S(x) = \sum_{k=0}^n a_k \phi_k(x),

where \phi_k,\; k=0, 1, \ldots is a sequence of functions that satisfy the linear recurrence relation

:\phi_{k+1}(x) = \alpha_k(x)\,\phi_k(x) + \beta_k(x)\,\phi_{k-1}(x),

where the coefficients \alpha_k(x) and \beta_k(x) are known in advance.

The algorithm is most useful when \phi_k(x) are functions that are complicated to compute directly, but \alpha_k(x) and \beta_k(x) are particularly simple. In the most common applications, \alpha(x) does not depend on k, and \beta is a constant that depends on neither x nor k.

To perform the summation for a given series of coefficients a_0, \ldots, a_n, compute the values b_k(x) by the "reverse" recurrence formula:

:\begin{align} b_{n+2}(x) &= b_{n+1}(x) = 0, \\ b_k(x) &= a_k + \alpha_k(x)\,b_{k+1}(x) + \beta_{k+1}(x)\,b_{k+2}(x). \end{align}

Note that this computation makes no direct reference to the functions \phi_k(x). After computing b_2(x) and b_1(x), the desired sum can be expressed in terms of them and the simplest functions \phi_0(x) and \phi_1(x):

:S(x) = \phi_0(x)\,a_0 + \phi_1(x)\,b_1(x) + \beta_1(x)\,\phi_0(x)\,b_2(x).

See Fox and Parker for more information and stability analyses.
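The reverse recurrence can be sketched directly in code. The following is a minimal Python sketch (not from the original article); `alpha` and `beta` are assumed to be callables `(k, x)`, and `phi0`, `phi1` are the values of the first two basis functions at `x`:

```python
def clenshaw(x, a, alpha, beta, phi0, phi1):
    """Evaluate S(x) = sum_{k=0}^{n} a[k] * phi_k(x), where the phi_k
    satisfy phi_{k+1}(x) = alpha(k, x)*phi_k(x) + beta(k, x)*phi_{k-1}(x).
    """
    n = len(a) - 1
    b_kp1 = b_kp2 = 0.0          # b_{n+1} = b_{n+2} = 0
    for k in range(n, 0, -1):    # reverse recurrence, k = n down to 1
        b_k = a[k] + alpha(k, x) * b_kp1 + beta(k + 1, x) * b_kp2
        b_kp2, b_kp1 = b_kp1, b_k
    # S(x) = phi_0*a_0 + phi_1*b_1 + beta_1*phi_0*b_2
    return phi0 * a[0] + phi1 * b_kp1 + beta(1, x) * phi0 * b_kp2
```

For instance, with `alpha = lambda k, x: x`, `beta = lambda k, x: 0.0`, `phi0 = 1`, and `phi1 = x`, this evaluates an ordinary polynomial in monomial form.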


Examples


Horner as a special case of Clenshaw

A particularly simple case occurs when evaluating a polynomial of the form

:S(x) = \sum_{k=0}^n a_k x^k.

The functions are simply

:\begin{align} \phi_0(x) &= 1, \\ \phi_k(x) &= x^k = x\,\phi_{k-1}(x), \end{align}

and are produced by the recurrence coefficients \alpha(x) = x and \beta = 0.

In this case, the recurrence formula to compute the sum is

:b_k(x) = a_k + x\,b_{k+1}(x),

and, in this case, the sum is simply

:S(x) = a_0 + x\,b_1(x) = b_0(x),

which is exactly the usual Horner's method.
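With \beta = 0 the Clenshaw recurrence collapses to a single running value, which is Horner's method. A minimal Python sketch (not from the original article):

```python
def horner(x, a):
    """Evaluate a[0] + a[1]*x + ... + a[n]*x**n via the collapsed
    Clenshaw recurrence b_k = a_k + x*b_{k+1}; the sum is b_0."""
    b = 0.0
    for ak in reversed(a):
        b = ak + x * b
    return b
```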


Special case for Chebyshev series

Consider a truncated Chebyshev series

:p_n(x) = a_0 + a_1 T_1(x) + a_2 T_2(x) + \cdots + a_n T_n(x).

The coefficients in the recursion relation for the Chebyshev polynomials are

:\alpha(x) = 2x, \quad \beta = -1,

with the initial conditions

:T_0(x) = 1, \quad T_1(x) = x.

Thus, the recurrence is

:b_k(x) = a_k + 2x\,b_{k+1}(x) - b_{k+2}(x),

and the final sum is

:p_n(x) = a_0 + x\,b_1(x) - b_2(x).

One way to evaluate this is to continue the recurrence one more step, and compute

:b_0(x) = 2a_0 + 2x\,b_1(x) - b_2(x)

(note the doubled ''a''_0 coefficient), followed by

:p_n(x) = \tfrac{1}{2}\left[b_0(x) - b_2(x)\right].
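The Chebyshev recurrence above can be sketched as follows; this minimal Python example (not from the original article) uses the direct form p_n(x) = a_0 + x\,b_1(x) - b_2(x):

```python
def chebyshev_clenshaw(x, a):
    """Evaluate p(x) = a[0] + a[1]*T_1(x) + ... + a[n]*T_n(x) via
    b_k = a_k + 2*x*b_{k+1} - b_{k+2}, then p(x) = a[0] + x*b_1 - b_2."""
    b_kp1 = b_kp2 = 0.0
    for k in range(len(a) - 1, 0, -1):
        b_kp1, b_kp2 = a[k] + 2 * x * b_kp1 - b_kp2, b_kp1
    return a[0] + x * b_kp1 - b_kp2
```

As a sanity check, evaluating the coefficient vector of T_3 reproduces T_3(x) = 4x^3 - 3x.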


Meridian arc length on the ellipsoid

Clenshaw summation is extensively used in geodetic applications. A simple application is summing the trigonometric series to compute the meridian arc distance on the surface of an ellipsoid. These have the form

:m(\theta) = C_0\,\theta + C_1\sin \theta + C_2\sin 2\theta + \cdots + C_n\sin n\theta.

Leaving off the initial C_0\,\theta term, the remainder is a summation of the appropriate form. There is no leading term because \phi_0(\theta) = \sin 0\theta = \sin 0 = 0.

The recurrence relation for \sin k\theta is

:\sin (k+1)\theta = 2 \cos\theta \sin k\theta - \sin (k-1)\theta,

making the coefficients in the recursion relation

:\alpha_k(\theta) = 2\cos\theta, \quad \beta_k = -1,

and the evaluation of the series is given by

:\begin{align} b_{n+2}(\theta) &= b_{n+1}(\theta) = 0, \\ b_k(\theta) &= C_k + 2\cos\theta\,b_{k+1}(\theta) - b_{k+2}(\theta), \quad \mathrm{for\ } n \ge k \ge 1. \end{align}

The final step is made particularly simple because \phi_0(\theta) = \sin 0 = 0, so the end of the recurrence is simply b_1(\theta)\sin\theta; the C_0\,\theta term is added separately:

:m(\theta) = C_0\,\theta + b_1(\theta)\sin \theta.

Note that the algorithm requires only the evaluation of two trigonometric quantities, \cos \theta and \sin \theta.
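The sine-series evaluation can be sketched in a few lines; the following minimal Python example (not from the original article) computes cos θ and sin θ once, as the text notes:

```python
import math

def sine_series(theta, C):
    """Evaluate m(theta) = C[0]*theta + C[1]*sin(theta) + ... + C[n]*sin(n*theta)
    via Clenshaw with alpha = 2*cos(theta), beta = -1.  Because
    phi_0 = sin(0) = 0, the sum reduces to b_1*sin(theta)."""
    c, s = math.cos(theta), math.sin(theta)   # only trig calls needed
    b_kp1 = b_kp2 = 0.0
    for k in range(len(C) - 1, 0, -1):
        b_kp1, b_kp2 = C[k] + 2 * c * b_kp1 - b_kp2, b_kp1
    return C[0] * theta + b_kp1 * s
```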


Difference in meridian arc lengths

Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write

:m(\theta_1)-m(\theta_2) = C_0(\theta_1-\theta_2) + \sum_{k=1}^n 2 C_k \sin\bigl(\tfrac{1}{2}k(\theta_1-\theta_2)\bigr) \cos\bigl(\tfrac{1}{2}k(\theta_1+\theta_2)\bigr).

Clenshaw summation can be applied in this case, provided we simultaneously compute m(\theta_1)+m(\theta_2) and perform a matrix summation,

:\mathsf M(\theta_1,\theta_2) = \begin{bmatrix} (m(\theta_1) + m(\theta_2)) / 2 \\ (m(\theta_1) - m(\theta_2)) / (\theta_1 - \theta_2) \end{bmatrix} = C_0 \begin{bmatrix}\mu\\1\end{bmatrix} + \sum_{k=1}^n C_k \mathsf F_k(\theta_1,\theta_2),

where

:\begin{align} \delta &= \tfrac{1}{2}(\theta_1-\theta_2), \\ \mu &= \tfrac{1}{2}(\theta_1+\theta_2), \\ \mathsf F_k(\theta_1,\theta_2) &= \begin{bmatrix} \cos k\delta \sin k\mu \\ \displaystyle\frac{\sin k\delta}{\delta} \cos k\mu \end{bmatrix}. \end{align}

The first element of \mathsf M(\theta_1,\theta_2) is the average value of m and the second element is the average slope. \mathsf F_k(\theta_1,\theta_2) satisfies the recurrence relation

:\mathsf F_{k+1}(\theta_1,\theta_2) = \mathsf A(\theta_1,\theta_2)\, \mathsf F_k(\theta_1,\theta_2) - \mathsf F_{k-1}(\theta_1,\theta_2),

where

:\mathsf A(\theta_1,\theta_2) = 2\begin{bmatrix} \cos \delta \cos \mu & -\delta\sin \delta \sin \mu \\ -\displaystyle\frac{\sin \delta \sin \mu}{\delta} & \cos \delta \cos \mu \end{bmatrix}

takes the place of \alpha in the recurrence relation, and \beta=-1. The standard Clenshaw algorithm can now be applied to yield

:\begin{align} \mathsf B_{n+2} &= \mathsf B_{n+1} = \mathsf 0, \\ \mathsf B_k &= C_k \mathsf I + \mathsf A \mathsf B_{k+1} - \mathsf B_{k+2}, \qquad \mathrm{for\ } n \ge k \ge 1, \\ \mathsf M(\theta_1,\theta_2) &= C_0 \begin{bmatrix}\mu\\1\end{bmatrix} + \mathsf B_1 \mathsf F_1(\theta_1,\theta_2), \end{align}

where \mathsf B_k are 2×2 matrices. Finally we have

:\frac{m(\theta_1) - m(\theta_2)}{\theta_1 - \theta_2} = \mathsf M_2(\theta_1, \theta_2).

This technique can be used in the limit \theta_2 = \theta_1 = \mu and \delta = 0 to simultaneously compute m(\mu) and the derivative dm(\mu)/d\mu, provided that, in evaluating \mathsf F_1 and \mathsf A, we take \lim_{\delta\to 0}(\sin k\delta)/\delta = k.
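The matrix form of the recurrence can be sketched as follows. This minimal Python example (not from the original article; it uses plain nested lists for the 2×2 matrices and assumes \delta \ne 0) returns the pair [(m_1+m_2)/2,\ (m_1-m_2)/(\theta_1-\theta_2)]:

```python
import math

def mean_and_slope(theta1, theta2, C):
    """Matrix Clenshaw summation for m(theta) = C[0]*theta + sum C[k]*sin(k*theta).
    Returns [(m1 + m2)/2, (m1 - m2)/(theta1 - theta2)].  Requires theta1 != theta2;
    the delta -> 0 limit (sin(k*delta)/delta -> k) is not handled here."""
    d = (theta1 - theta2) / 2.0    # delta
    mu = (theta1 + theta2) / 2.0   # mu
    cd, sd = math.cos(d), math.sin(d)
    cmu, smu = math.cos(mu), math.sin(mu)
    # A takes the place of alpha in the recurrence; beta = -1
    A = [[2 * cd * cmu, -2 * d * sd * smu],
         [-2 * sd * smu / d, 2 * cd * cmu]]
    # F_1(theta1, theta2)
    F1 = [cd * smu, sd / d * cmu]
    n = len(C) - 1
    B1 = [[0.0, 0.0], [0.0, 0.0]]  # B_{k+1}, initially B_{n+1} = 0
    B2 = [[0.0, 0.0], [0.0, 0.0]]  # B_{k+2}, initially B_{n+2} = 0
    for k in range(n, 0, -1):
        # B_k = C_k*I + A*B_{k+1} - B_{k+2}  (2x2 matrix arithmetic)
        Bk = [[C[k] * (i == j)
               + A[i][0] * B1[0][j] + A[i][1] * B1[1][j]
               - B2[i][j] for j in range(2)] for i in range(2)]
        B2, B1 = B1, Bk
    # M = C_0*[mu, 1] + B_1 * F_1
    return [C[0] * mu + B1[0][0] * F1[0] + B1[0][1] * F1[1],
            C[0] * 1.0 + B1[1][0] * F1[0] + B1[1][1] * F1[1]]
```

Both components can be checked against a direct evaluation of m(\theta_1) and m(\theta_2); the point of the matrix form is that the slope component never subtracts two nearly equal sums.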


See also

* Horner's method to evaluate polynomials in monomial form
* De Casteljau's algorithm to evaluate polynomials in Bézier form

