
The Chebyshev polynomials are two sequences of orthogonal polynomials related to the cosine and sine functions, notated as T_n(x) and U_n(x). They can be defined in several equivalent ways, one of which starts with trigonometric functions:
The Chebyshev polynomials of the first kind T_n are defined by:
T_n(\cos\theta) = \cos(n\theta).
Similarly, the Chebyshev polynomials of the second kind U_n are defined by:
U_n(\cos\theta)\,\sin\theta = \sin\big((n+1)\theta\big).
That these expressions define polynomials in \cos\theta is not obvious at first sight but can be shown using de Moivre's formula (see below).
The Chebyshev polynomials T_n are polynomials with the largest possible leading coefficient whose absolute value on the interval [-1, 1] is bounded by 1. They are also the "extremal" polynomials for many other properties.
In 1952, Cornelius Lanczos showed that the Chebyshev polynomials are important in approximation theory for the solution of linear systems; the roots of T_n(x), which are also called ''Chebyshev nodes'', are used as matching points for optimizing polynomial interpolation. The resulting interpolation polynomial minimizes the problem of Runge's phenomenon and provides an approximation that is close to the best polynomial approximation to a continuous function under the maximum norm, also called the "minimax" criterion. This approximation leads directly to the method of Clenshaw–Curtis quadrature.
These polynomials were named after Pafnuty Chebyshev. The letter T is used because of the alternative transliterations of the name ''Chebyshev'' as Tchebycheff, Tchebyshev (French) or Tschebyschow (German).
Definitions
Recurrence definition
The ''Chebyshev polynomials of the first kind'' can be defined by the recurrence relation:
\begin{align}
T_0(x) &= 1, \\
T_1(x) &= x, \\
T_{n+1}(x) &= 2x\,T_n(x) - T_{n-1}(x).
\end{align}
The ''Chebyshev polynomials of the second kind'' can be defined by the recurrence relation:
\begin{align}
U_0(x) &= 1, \\
U_1(x) &= 2x, \\
U_{n+1}(x) &= 2x\,U_n(x) - U_{n-1}(x),
\end{align}
which differs from the above only by the rule for ''n = 1''.
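The recurrences translate directly into code. The following is a minimal Python sketch (using NumPy; the helper names cheb_T and cheb_U are illustrative, not from any library) that evaluates T_n and U_n from the three-term recurrences above:

```python
import numpy as np

def cheb_T(n, x):
    """Evaluate T_n(x) via T_0 = 1, T_1 = x, T_{k+1} = 2x T_k - T_{k-1}."""
    x = np.asarray(x, dtype=float)
    t_prev, t_curr = np.ones_like(x), x.copy()
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

def cheb_U(n, x):
    """Evaluate U_n(x) via U_0 = 1, U_1 = 2x, U_{k+1} = 2x U_k - U_{k-1}."""
    x = np.asarray(x, dtype=float)
    u_prev, u_curr = np.ones_like(x), 2 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u_curr = u_curr, 2 * x * u_curr - u_prev
    return u_curr

# Quick check against the explicit examples listed later in this article:
x = np.linspace(-1, 1, 5)
assert np.allclose(cheb_T(3, x), 4 * x**3 - 3 * x)   # T_3(x) = 4x^3 - 3x
assert np.allclose(cheb_U(3, x), 8 * x**3 - 4 * x)   # U_3(x) = 8x^3 - 4x
```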
Trigonometric definition
The Chebyshev polynomials of the first and second kind can be defined as the unique polynomials satisfying:
T_n(\cos\theta) = \cos(n\theta)
and:
U_n(\cos\theta)\,\sin\theta = \sin\big((n+1)\theta\big)
for n = 0, 1, 2, 3, \ldots
An equivalent way to state this is via exponentiation of a complex number: given a complex number z = a + bi with absolute value one,
z^n = T_n(a) + b\,U_{n-1}(a)\,i.
Chebyshev polynomials can be defined in this form when studying trigonometric polynomials.
That \cos(n\theta) is an nth-degree polynomial in \cos\theta can be seen by observing that \cos(n\theta) is the real part of one side of de Moivre's formula:
\cos(n\theta) + i\,\sin(n\theta) = (\cos\theta + i\,\sin\theta)^n.
The real part of the other side is a polynomial in \cos\theta and \sin\theta, in which all powers of \sin\theta are even and thus replaceable through the identity \cos^2\theta + \sin^2\theta = 1. By the same reasoning, \sin(n\theta) is the imaginary part of the polynomial, in which all powers of \sin\theta are odd and thus, if one factor of \sin\theta is factored out, the remaining factors can be replaced to create a (n-1)st-degree polynomial in \cos\theta.
For x outside the interval [-1, 1], the above definition implies:
T_n(x) = \begin{cases}
\cosh\big(n\,\operatorname{arcosh}(x)\big) & \text{if } x \ge 1, \\
(-1)^n \cosh\big(n\,\operatorname{arcosh}(-x)\big) & \text{if } x \le -1.
\end{cases}
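A short numerical sanity check of the trigonometric and hyperbolic characterizations, as a sketch built on NumPy's Chebyshev module (numpy.polynomial.chebyshev.chebval evaluates a polynomial given by its Chebyshev coefficients; a unit coefficient vector picks out T_n):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 7
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0                      # coefficient vector representing T_n

# Trigonometric definition on [-1, 1]:
theta = np.linspace(0.0, np.pi, 101)
assert np.allclose(C.chebval(np.cos(theta), coeffs), np.cos(n * theta))

# Hyperbolic form outside [-1, 1]:
x = np.linspace(1.0, 3.0, 50)
assert np.allclose(C.chebval(x, coeffs), np.cosh(n * np.arccosh(x)))
assert np.allclose(C.chebval(-x, coeffs), (-1)**n * np.cosh(n * np.arccosh(x)))
```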
Commuting polynomials definition
Chebyshev polynomials can also be characterized by the following theorem:
If F_n(x) is a family of monic polynomials with coefficients in a field of characteristic 0 such that \deg F_n(x) = n and F_m(F_n(x)) = F_n(F_m(x)) for all m and n, then, up to a simple change of variables, either F_n(x) = x^n for all n or F_n(x) = 2 \cdot T_n(x/2) for all n.
Pell equation definition
The Chebyshev polynomials can also be defined as the solutions to the Pell equation:
T_n(x)^2 - \left(x^2 - 1\right) U_{n-1}(x)^2 = 1
in a ring R[x].
Generating functions
The ordinary generating function for T_n is:
\sum_{n=0}^{\infty} T_n(x)\,t^n = \frac{1 - tx}{1 - 2tx + t^2}.
There are several other generating functions for the Chebyshev polynomials; the exponential generating function is:
\begin{align}
\sum_{n=0}^{\infty} T_n(x)\,\frac{t^n}{n!}
&= \tfrac12\left(e^{t\left(x - \sqrt{x^2-1}\right)} + e^{t\left(x + \sqrt{x^2-1}\right)}\right) \\
&= e^{tx} \cosh\left(t\sqrt{x^2-1}\right).
\end{align}
The generating function relevant for 2-dimensional potential theory and multipole expansion is:
\sum_{n=1}^{\infty} T_n(x)\,\frac{t^n}{n} = \ln\left(\frac{1}{\sqrt{1 - 2tx + t^2}}\right).
The ordinary generating function for U_n is:
\sum_{n=0}^{\infty} U_n(x)\,t^n = \frac{1}{1 - 2tx + t^2},
and the exponential generating function is:
\sum_{n=0}^{\infty} U_n(x)\,\frac{t^n}{n!} = e^{tx} \biggl(\cosh\left(t\sqrt{x^2-1}\right) + \frac{x}{\sqrt{x^2-1}} \sinh\left(t\sqrt{x^2-1}\right)\biggr).
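A numerical sketch checking the three ordinary generating functions above for a point with |t| < 1 and x in [-1, 1] (T_n and U_n are computed here from their trigonometric forms; no library support is assumed):

```python
import numpy as np

x, t, N = 0.3, 0.5, 80          # |t| < 1, so the partial sums converge quickly
theta = np.arccos(x)

T = lambda n: np.cos(n * theta)
U = lambda n: np.sin((n + 1) * theta) / np.sin(theta)

# sum T_n t^n -> (1 - t x) / (1 - 2 t x + t^2)
assert np.isclose(sum(T(n) * t**n for n in range(N)),
                  (1 - t * x) / (1 - 2 * t * x + t**2))

# sum U_n t^n -> 1 / (1 - 2 t x + t^2)
assert np.isclose(sum(U(n) * t**n for n in range(N)),
                  1 / (1 - 2 * t * x + t**2))

# sum_{n>=1} T_n t^n / n -> ln(1 / sqrt(1 - 2 t x + t^2))
assert np.isclose(sum(T(n) * t**n / n for n in range(1, N)),
                  np.log(1 / np.sqrt(1 - 2 * t * x + t**2)))
```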
Relations between the two kinds of Chebyshev polynomials
The Chebyshev polynomials of the first and second kinds correspond to a complementary pair of
Lucas sequences
\tilde V_n(P,Q) and \tilde U_n(P,Q) with parameters P = 2x and Q = 1:
\begin{align}
\tilde U_n(2x,1) &= U_{n-1}(x), \\
\tilde V_n(2x,1) &= 2\, T_n(x).
\end{align}
It follows that they also satisfy a pair of mutual recurrence equations:
\begin{align}
T_{n+1}(x) &= x\,T_n(x) - (1 - x^2)\,U_{n-1}(x), \\
U_{n+1}(x) &= x\,U_n(x) + T_{n+1}(x).
\end{align}
The second of these may be rearranged using the recurrence definition for the Chebyshev polynomials of the second kind to give:
T_n(x) = \tfrac12 \big(U_n(x) - U_{n-2}(x)\big).
Using this formula iteratively gives the sum formula:
U_n(x) = \begin{cases}
2\sum_{j\,\text{odd}}^{n} T_j(x) & \text{for odd } n, \\
2\sum_{j\,\text{even}}^{n} T_j(x) - 1 & \text{for even } n,
\end{cases}
while replacing U_n(x) and U_{n-2}(x) using the derivative formula for T_n(x) gives the recurrence relationship for the derivative of T_n:
2\,T_n(x) = \frac{1}{n+1}\,\frac{\mathrm d}{\mathrm dx}\, T_{n+1}(x) - \frac{1}{n-1}\,\frac{\mathrm d}{\mathrm dx}\, T_{n-1}(x), \qquad n=2,3,\ldots
This relationship is used in the
Chebyshev spectral method of solving differential equations.
Turán's inequalities for the Chebyshev polynomials are:
\begin{align}
T_n(x)^2 - T_{n-1}(x)\,T_{n+1}(x) &= 1-x^2 > 0 && \text{for } -1 < x < 1, \\
U_n(x)^2 - U_{n-1}(x)\,U_{n+1}(x) &= 1 > 0.
\end{align}
The integral relations are:
\begin{align}
\int_{-1}^1 \frac{T_n(y)}{(y-x)\sqrt{1-y^2}}\,\mathrm dy &= \pi\,U_{n-1}(x), \\
\int_{-1}^1 \frac{U_{n-1}(y)}{y-x}\,\sqrt{1-y^2}\,\mathrm dy &= -\pi\,T_n(x),
\end{align}
where integrals are considered as principal value.
Explicit expressions
Using the complex number exponentiation definition of the Chebyshev polynomial, one can derive the following expressions, valid for any real x:
\begin{align}
T_n(x)
&= \tfrac12 \Big( \bigl(x - \sqrt{x^2-1}\bigr)^n + \bigl(x + \sqrt{x^2-1}\bigr)^n \Big) \\
&= \tfrac12 \Big( \bigl(x + \sqrt{x^2-1}\bigr)^n + \bigl(x + \sqrt{x^2-1}\bigr)^{-n} \Big).
\end{align}
The two are equivalent because
\textstyle \bigl(x + \sqrt{x^2-1}\,\bigr)\bigl(x - \sqrt{x^2-1}\,\bigr) = 1.
An explicit form of the Chebyshev polynomial in terms of monomials
x^k follows from
de Moivre's formula:
T_n(\cos\theta) = \operatorname{Re}(\cos n\theta + i \sin n\theta) = \operatorname{Re}\left((\cos\theta + i \sin\theta)^n\right),
where \operatorname{Re} denotes the real part of a complex number. Expanding the formula, one gets:
(\cos\theta + i \sin\theta)^n = \sum\limits_{j=0}^{n} \binom{n}{j}\, i^j \sin^j\theta \cos^{n-j}\theta.
The real part of the expression is obtained from summands corresponding to even indices. Noting i^{2j} = (-1)^j and \sin^{2j}\theta = \left(1-\cos^2\theta\right)^j, one gets the explicit formula:
\cos n\theta = \sum\limits_{j=0}^{\lfloor n/2\rfloor} \binom{n}{2j} \left(\cos^2\theta - 1\right)^j \cos^{n-2j}\theta,
which in turn means that:
T_n(x) = \sum\limits_{j=0}^{\lfloor n/2\rfloor} \binom{n}{2j} \left(x^2-1\right)^j x^{n-2j}.
This can be written as a hypergeometric function:
\begin{align}
T_n(x) & = \sum_{k=0}^{\lfloor n/2\rfloor} \binom{n}{2k} \left(x^2-1\right)^k x^{n-2k} \\
& = x^n \sum_{k=0}^{\lfloor n/2\rfloor} \binom{n}{2k} \left(1 - x^{-2}\right)^k \\
& = \frac{n}{2} \sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k \frac{(n-k-1)!}{k!\,(n-2k)!}\,(2x)^{n-2k} \quad \text{for } n > 0 \\
& = n \sum_{k=0}^{n} (-2)^k \frac{(n+k-1)!}{(n-k)!\,(2k)!}\,(1 - x)^k \quad \text{for } n > 0 \\
& = {}_2F_1\!\left(-n, n; \tfrac 1 2; \tfrac12(1-x)\right)
\end{align}
with inverse:
x^n = 2^{1-n}\mathop{{\sum}'}_{\substack{j=0 \\ j\,\equiv\,n \pmod 2}}^{n} \binom{n}{\tfrac{n-j}{2}}\;T_j(x),
where the prime at the summation symbol indicates that the contribution of j = 0 needs to be halved if it appears.
A related expression for T_n as a sum of monomials with binomial coefficients and powers of two is:
T_n(x) = \sum\limits_{m=0}^{\lfloor n/2\rfloor} (-1)^m \left(\binom{n-m}{m} + \binom{n-m-1}{m-1}\right) \cdot 2^{n-2m-1} \cdot x^{n-2m}.
Similarly,
U_n can be expressed in terms of hypergeometric functions:
\begin{align}
U_n(x) &= \frac{\left(x+\sqrt{x^2-1}\right)^{n+1} - \left(x-\sqrt{x^2-1}\right)^{n+1}}{2\sqrt{x^2-1}} \\
&= \sum_{k=0}^{\lfloor n/2\rfloor} \binom{n+1}{2k+1} \left(x^2-1\right)^k x^{n-2k} \\
&= x^n \sum_{k=0}^{\lfloor n/2\rfloor} \binom{n+1}{2k+1} \left(1 - x^{-2}\right)^k \\
&= \sum_{k=0}^{\lfloor n/2\rfloor} \binom{2k-(n+1)}{k}~(2x)^{n-2k} & \text{for } n > 0 \\
&= \sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k \binom{n-k}{k}~(2x)^{n-2k} & \text{for } n > 0 \\
&= \sum_{k=0}^{n} (-2)^k \frac{(n+k+1)!}{(n-k)!\,(2k+1)!} (1 - x)^k & \text{for } n > 0 \\
&= (n + 1)\, {}_2F_1\big({-n}, n + 2; \tfrac32; \tfrac12(1 - x)\big).
\end{align}
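The following short Python sketch cross-checks two of the explicit expressions above against the trigonometric definitions (the helper names are illustrative; only the standard library and NumPy are used):

```python
import numpy as np
from math import comb, floor

def T_explicit(n, x):
    # T_n(x) = sum_{k=0}^{floor(n/2)} C(n, 2k) (x^2 - 1)^k x^(n - 2k)
    return sum(comb(n, 2 * k) * (x**2 - 1)**k * x**(n - 2 * k)
               for k in range(floor(n / 2) + 1))

def U_explicit(n, x):
    # U_n(x) = sum_{k=0}^{floor(n/2)} (-1)^k C(n - k, k) (2x)^(n - 2k), n > 0
    return sum((-1)**k * comb(n - k, k) * (2 * x)**(n - 2 * k)
               for k in range(floor(n / 2) + 1))

theta = 0.7
x = np.cos(theta)
for n in range(1, 9):
    assert np.isclose(T_explicit(n, x), np.cos(n * theta))
    assert np.isclose(U_explicit(n, x), np.sin((n + 1) * theta) / np.sin(theta))
```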
Properties
Symmetry
\begin{align}
T_n(-x) &= (-1)^n\, T_n(x), \\
U_n(-x) &= (-1)^n\, U_n(x).
\end{align}
That is, Chebyshev polynomials of even order have
even symmetry and therefore contain only even powers of
x. Chebyshev polynomials of odd order have
odd symmetry and therefore contain only odd powers of
x.
Roots and extrema
A Chebyshev polynomial of either kind with degree n has n different simple roots, called Chebyshev roots, in the interval [-1, 1]. The roots of the Chebyshev polynomial of the first kind are sometimes called Chebyshev nodes because they are used as ''nodes'' in polynomial interpolation. Using the trigonometric definition and the fact that:
\cos\left((2k+1)\frac{\pi}{2}\right)=0,
one can show that the roots of T_n are:
x_k = \cos\left(\frac{\pi(2k+1)}{2n}\right),\quad k=0,\ldots,n-1.
Similarly, the roots of U_n are:
x_k = \cos\left(\frac{k}{n+1}\pi\right),\quad k=1,\ldots,n.
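A brief numerical check of the two root formulas, as a sketch in plain NumPy (the roots are plugged back into the trigonometric forms of T_n and U_n):

```python
import numpy as np

n = 6
roots_T = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))    # k = 0, ..., n-1
roots_U = np.cos(np.arange(1, n + 1) * np.pi / (n + 1))       # k = 1, ..., n

theta_T = np.arccos(roots_T)
theta_U = np.arccos(roots_U)
assert np.allclose(np.cos(n * theta_T), 0)                          # T_n vanishes
assert np.allclose(np.sin((n + 1) * theta_U) / np.sin(theta_U), 0)  # U_n vanishes
```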
The
extrema of
T_n on the interval
-1\leq x\leq 1 are located at:
x_k = \cos\left(\frac{k}{n}\pi\right),\quad k=0,\ldots,n.
One unique property of the Chebyshev polynomials of the first kind is that on the interval
-1\leq x\leq 1 all of the
extrema have values that are either −1 or 1. Thus these polynomials have only two finite
critical values, the defining property of
Shabat polynomials. Both the first and second kinds of Chebyshev polynomial have extrema at the endpoints, given by:
\begin{align}
T_n(1) &= 1, \\
T_n(-1) &= (-1)^n, \\
U_n(1) &= n+1, \\
U_n(-1) &= (-1)^n (n+1).
\end{align}
The
extrema of
T_n(x) on the interval
-1 \leq x \leq 1 where
n>0 are located at
n+1 values of
x. They are
\pm 1, or
\cos\left(\frac{2\pi k}{d}\right) where d > 2, d \mid 2n, 0 < k < d/2 and (k, d) = 1, i.e., k and d are relatively prime.
Specifically (see Minimal polynomial of 2cos(2π/n)), when n is even:
n is even:
* T_n(x) = 1 if x = \pm 1, or d > 2 and 2n/d is even. There are n/2 + 1 such values of x.
* T_n(x) = -1 if d > 2 and 2n/d is odd. There are n/2 such values of x.
When
n is odd:
* T_n(x) = 1 if x = 1, or d > 2 and 2n/d is even. There are (n+1)/2 such values of x.
* T_n(x) = -1 if x = -1, or d > 2 and 2n/d is odd. There are (n+1)/2 such values of x.
Differentiation and integration
The derivatives of the polynomials can be less than straightforward. By differentiating the polynomials in their trigonometric forms, it can be shown that:
\begin{align}
\frac{\mathrm dT_n}{\mathrm dx} &= n\,U_{n-1}, \\
\frac{\mathrm dU_n}{\mathrm dx} &= \frac{(n+1)\,T_{n+1} - x\,U_n}{x^2-1}, \\
\frac{\mathrm d^2 T_n}{\mathrm dx^2} &= n\, \frac{n\,T_n - x\,U_{n-1}}{x^2-1} = n\, \frac{(n+1)\,T_n - U_n}{x^2-1}.
\end{align}
The last two formulas can be numerically troublesome due to the division by zero (0/0 indeterminate form, specifically) at x = 1 and x = -1. By L'Hôpital's rule:
\begin{align}
\left. \frac{\mathrm d^2 T_n}{\mathrm dx^2} \right|_{x=1} \!\! &= \frac{n^4 - n^2}{3}, \\
\left. \frac{\mathrm d^2 T_n}{\mathrm dx^2} \right|_{x=-1} \!\! &= (-1)^n\, \frac{n^4 - n^2}{3}.
\end{align}
More generally,
\left.\frac{\mathrm d^k T_n}{\mathrm dx^k} \right|_{x=\pm 1} \!\! = (\pm 1)^{n+k}\prod_{j=0}^{k-1}\frac{n^2-j^2}{2j+1}~,
which is of great use in the numerical solution of eigenvalue problems.
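As a sketch, the first of the derivative formulas and the endpoint formula can be checked with NumPy's Chebyshev routines (chebder differentiates a Chebyshev coefficient vector; U_{n-1} is evaluated from its trigonometric form):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 8
c = np.zeros(n + 1); c[n] = 1.0          # coefficient vector of T_n
dc = C.chebder(c)                        # coefficients of dT_n/dx

# dT_n/dx = n U_{n-1} on the open interval (-1, 1):
x = np.linspace(-0.99, 0.99, 201)
theta = np.arccos(x)
assert np.allclose(C.chebval(x, dc), n * np.sin(n * theta) / np.sin(theta))

# Endpoint values d^k T_n / dx^k |_{x=±1} = (±1)^{n+k} prod_{j<k} (n^2-j^2)/(2j+1):
def endpoint_derivative(n, k, sign):
    return sign**(n + k) * np.prod([(n**2 - j**2) / (2 * j + 1) for j in range(k)])

for k in range(1, 4):
    dk = C.chebder(c, m=k)
    assert np.isclose(C.chebval(1.0, dk), endpoint_derivative(n, k, +1))
    assert np.isclose(C.chebval(-1.0, dk), endpoint_derivative(n, k, -1))
```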
Also, we have:
\frac{\mathrm d^p}{\mathrm dx^p}\,T_n(x) = 2^p\,n\mathop{{\sum}'}_{\substack{0 \le k \le n-p \\ k\,\equiv\,n-p \pmod 2}}
\binom{\frac{n+p-k}{2}-1}{\frac{n-p-k}{2}}\frac{\left(\frac{n+p+k}{2}-1\right)!}{\left(\frac{n-p+k}{2}\right)!}\,T_k(x),~\qquad p \ge 1,
where the prime at the summation symbols means that the term contributed by k = 0 is to be halved, if it appears.
Concerning integration, the first derivative of the T_n implies that:
\int U_n(x)\, \mathrm dx = \frac{T_{n+1}(x)}{n+1}
and the recurrence relation for the first kind polynomials involving derivatives establishes that for
n\geq 2:
\int T_n(x)\, \mathrm dx = \frac12\,\left(\frac{T_{n+1}(x)}{n+1} - \frac{T_{n-1}(x)}{n-1}\right) = \frac{n\,T_{n+1}(x)}{n^2-1} - \frac{x\,T_n(x)}{n-1}.
The last formula can be further manipulated to express the integral of
T_n as a function of Chebyshev polynomials of the first kind only:
\begin{align}
\int T_n\, \mathrm dx &= \frac{n}{n^2-1}\, T_{n+1} - \frac{1}{n-1}\, T_1\, T_n \\
&= \frac{n}{n^2-1}\,T_{n+1} - \frac{1}{2(n-1)}\,(T_{n+1} + T_{n-1}) \\
&= \frac{1}{2(n+1)}\,T_{n+1} - \frac{1}{2(n-1)}\,T_{n-1}.
\end{align}
Furthermore, we have:
\int_{-1}^1 T_n(x)\, \mathrm dx =
\begin{cases}
\frac{(-1)^n + 1}{1 - n^2} & \text{if}~ n \ne 1, \\
0 & \text{if}~ n = 1.
\end{cases}
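A sketch checking the antiderivative identity and the definite integral above with NumPy (chebint integrates a Chebyshev coefficient vector; only the non-constant coefficients are compared, since antiderivatives agree up to a constant):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

for n in range(2, 8):
    c = np.zeros(n + 1); c[n] = 1.0                 # T_n
    integ = C.chebint(c)                            # an antiderivative of T_n

    expected = np.zeros(n + 2)
    expected[n + 1] += 1 / (2 * (n + 1))            #  T_{n+1} / (2(n+1))
    expected[n - 1] -= 1 / (2 * (n - 1))            # -T_{n-1} / (2(n-1))
    assert np.allclose(integ[1:], expected[1:])     # skip the T_0 (constant) term

    # Definite integral over [-1, 1]: ((-1)^n + 1) / (1 - n^2) for n != 1
    value = C.chebval(1.0, integ) - C.chebval(-1.0, integ)
    assert np.isclose(value, ((-1)**n + 1) / (1 - n**2))
```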
Products of Chebyshev polynomials
The Chebyshev polynomials of the first kind satisfy the relation:
T_m(x)\,T_n(x) = \tfrac12\!\left(T_{m+n}(x) + T_{|m-n|}(x)\right)\!,\qquad \forall m,n \ge 0,
which is easily proved from the
product-to-sum formula for the cosine:
2 \cos \alpha \, \cos \beta = \cos (\alpha + \beta) + \cos (\alpha - \beta).
For
n=1 this results in the already known recurrence formula, just arranged differently, and with
n=2 it forms the recurrence relation for all even or all odd indexed Chebyshev polynomials (depending on the parity of the lowest m), which implies the evenness or oddness of these polynomials. Three more useful formulas for evaluating Chebyshev polynomials can be concluded from this product expansion:
\begin{align}
T_{2n}(x) &= 2\,T_n^2(x) - T_0(x) &&= 2\,T_n^2(x) - 1, \\
T_{2n+1}(x) &= 2\,T_{n+1}(x)\,T_n(x) - T_1(x) &&= 2\,T_{n+1}(x)\,T_n(x) - x, \\
T_{2n-1}(x) &= 2\,T_{n-1}(x)\,T_n(x) - T_1(x) &&= 2\,T_{n-1}(x)\,T_n(x) - x .
\end{align}
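A quick numerical check of the basic product identity, as a sketch using only the trigonometric form T_k(cos θ) = cos(kθ):

```python
import numpy as np

theta = np.linspace(0.1, 3.0, 50)
T = lambda k: np.cos(k * theta)

# T_m T_n = (T_{m+n} + T_{|m-n|}) / 2 for all m, n >= 0
for m in range(6):
    for n in range(6):
        assert np.allclose(T(m) * T(n), 0.5 * (T(m + n) + T(abs(m - n))))
```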
The polynomials of the second kind satisfy the similar relation:
T_m(x)\,U_n(x) = \begin{cases}
\frac12\left(U_{m+n}(x) + U_{n-m}(x)\right), & ~\text{if}~ n \ge m-1,\\
\frac12\left(U_{m+n}(x) - U_{m-n-2}(x)\right), & ~\text{if}~ n \le m-2.
\end{cases}
(with the definition U_{-1} \equiv 0 by convention). They also satisfy:
U_m(x)\,U_n(x) = \sum_{k=0}^n\,U_{m-n+2k}(x) = \sum_{\substack{p=m-n \\ p\,\equiv\,m+n \pmod 2}}^{m+n} U_p(x)~.
for
m\geq n.
For
n=2 this recurrence reduces to:
U_{m+2}(x) = U_2(x)\,U_m(x) - U_m(x) - U_{m-2}(x) = U_m(x)\,\big(U_2(x) - 1\big) - U_{m-2}(x)~,
which establishes the evenness or oddness of the even or odd indexed Chebyshev polynomials of the second kind depending on whether
m starts with 2 or 3.
Composition and divisibility properties
The trigonometric definitions of
T_n and
U_n imply the composition or nesting properties:
\begin{align}
T_{mn}(x) &= T_m(T_n(x)),\\
U_{mn-1}(x) &= U_{m-1}(T_n(x))\,U_{n-1}(x).
\end{align}
For T_{mn} the order of composition may be reversed, making the family of polynomial functions T_n a commutative semigroup under composition.
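The nesting property and its commutativity are easy to verify numerically; the following sketch uses NumPy's Chebyshev class (Chebyshev.basis(k) represents T_k, and instances are callable):

```python
import numpy as np
from numpy.polynomial import Chebyshev

def T(k):
    return Chebyshev.basis(k)

x = np.linspace(-1, 1, 201)
m, n = 3, 4
assert np.allclose(T(m)(T(n)(x)), T(m * n)(x))    # T_m(T_n(x)) = T_{mn}(x)
assert np.allclose(T(m)(T(n)(x)), T(n)(T(m)(x)))  # composition commutes
```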
Since T_m(x) is divisible by x if m is odd, it follows that T_{mn}(x) is divisible by T_n(x) if m is odd. Furthermore, U_{mn-1}(x) is divisible by U_{n-1}(x), and in the case that m is even, divisible by T_n(x)\,U_{n-1}(x).
Orthogonality
Both
T_n and
U_n form a sequence of
orthogonal polynomials. The polynomials of the first kind T_n are orthogonal with respect to the weight:
\frac{1}{\sqrt{1-x^2}},
on the interval [-1, 1], i.e. we have:
\int_{-1}^1 T_n(x)\,T_m(x)\,\frac{\mathrm dx}{\sqrt{1-x^2}} =
\begin{cases}
0 & ~\text{if}~ n \ne m, \\
\pi & ~\text{if}~ n=m=0, \\
\frac{\pi}{2} & ~\text{if}~ n=m \ne 0.
\end{cases}
This can be proven by letting
x=\cos(\theta) and using the defining identity
T_n(\cos(\theta)) = \cos(n\theta).
Similarly, the polynomials of the second kind are orthogonal with respect to the weight:
\sqrt{1-x^2}
on the interval [-1, 1], i.e. we have:
\int_{-1}^1 U_n(x)\,U_m(x)\,\sqrt{1-x^2} \,\mathrm dx =
\begin{cases}
0 & ~\text{if}~ n \ne m, \\
\frac{\pi}{2} & ~\text{if}~ n = m.
\end{cases}
(The measure
\sqrt{1-x^2}\, \mathrm dx is, to within a normalizing constant, the
Wigner semicircle distribution.)
These orthogonality properties follow from the fact that the Chebyshev polynomials solve the
Chebyshev differential equations:
\begin{align}
(1 - x^2)\,T_n'' - x\,T_n' + n^2\, T_n &= 0, \\
(1 - x^2)\,U_n'' - 3x\,U_n' + n(n + 2)\, U_n &= 0,
\end{align}
which are
Sturm–Liouville differential equations. It is a general feature of such
differential equations that there is a distinguished orthonormal set of solutions. (Another way to define the Chebyshev polynomials is as the solutions to
those equations.)
The
T_n also satisfy a discrete orthogonality condition:
\sum_{k=0}^{N-1} T_i(x_k)\,T_j(x_k) =
\begin{cases}
0 & ~\text{if}~ i \ne j, \\
N & ~\text{if}~ i = j = 0, \\
\frac{N}{2} & ~\text{if}~ i = j \ne 0,
\end{cases}
where
N is any integer greater than
\max(i,j), and the
x_k are the
N Chebyshev nodes (see above) of
T_N(x):
x_k = \cos\left(\pi\,\frac{2k+1}{2N}\right) \quad ~\text{for}~ k = 0, 1, \dots, N-1.
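The discrete orthogonality condition for the T_i is exact and can be confirmed directly, as in this short NumPy sketch (T_i(x_k) = cos(i θ_k) at the nodes):

```python
import numpy as np

N = 12
k = np.arange(N)
theta = np.pi * (2 * k + 1) / (2 * N)        # so that x_k = cos(theta_k)

for i in range(N):          # valid whenever N > max(i, j)
    for j in range(N):
        s = np.sum(np.cos(i * theta) * np.cos(j * theta))
        if i != j:
            expected = 0.0
        elif i == 0:
            expected = N
        else:
            expected = N / 2
        assert np.isclose(s, expected, atol=1e-9)
```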
For the polynomials of the second kind and any integer
N>i+j with the same Chebyshev nodes
x_k, there are similar sums:
\sum_{k=0}^{N-1} U_i(x_k)\,U_j(x_k)\left(1-x_k^2\right) =
\begin{cases}
0 & \text{if}~ i \ne j, \\
\frac{N}{2} & \text{if}~ i = j,
\end{cases}
and without the weight function:
\sum_{k=0}^{N-1} U_i(x_k)\,U_j(x_k) =
\begin{cases}
0 & ~\text{if}~ i \not\equiv j \pmod 2, \\
N \cdot (1 + \min\{i,j\}) & ~\text{if}~ i \equiv j\pmod 2.
\end{cases}
For any integer
N > i + j, based on the N zeros of U_N(x):
y_k = \cos\left(\pi\,\frac{k+1}{N+1}\right) \quad ~\text{for}~ k=0, 1, \dots, N-1,
one can get the sum:
\sum_{k=0}^{N-1} U_i(y_k)\,U_j(y_k)\left(1-y_k^2\right) =
\begin{cases}
0 & ~\text{if } i \ne j, \\
\frac{N+1}{2} & ~\text{if } i = j,
\end{cases}
and again without the weight function:
\sum_{k=0}^{N-1} U_i(y_k)\,U_j(y_k) =
\begin{cases}
0 & ~\text{if}~ i \not\equiv j \pmod 2, \\
\bigl(\min\{i,j\} + 1\bigr)\bigl(N-\max\{i,j\}\bigr) & ~\text{if}~ i \equiv j\pmod 2.
\end{cases}
Minimal ∞-norm
For any given
n\geq 1, among the polynomials of degree
n with leading coefficient 1 (
monic polynomials):
f(x) = \frac{1}{2^{n-1}}\,T_n(x)
is the one of which the maximal absolute value on the interval [-1, 1] is minimal.
This maximal absolute value is:
\frac{1}{2^{n-1}}
and |f(x)| reaches this maximum exactly n+1 times at:
x = \cos \frac{k\pi}{n}\quad\text{for } 0 \le k \le n.
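An illustration (not a proof) of this minimal-norm property, as a sketch in NumPy: the monic polynomial 2^{1-n} T_n has sup-norm 2^{1-n} on [-1, 1], which is smaller than the sup-norm of other monic degree-n polynomials such as x^n or the monic polynomial with equally spaced roots.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 8
x = np.linspace(-1, 1, 20001)

c = np.zeros(n + 1); c[n] = 2.0**(1 - n)          # monic Chebyshev 2^{1-n} T_n
print(np.max(np.abs(C.chebval(x, c))), 2.0**(1 - n))   # ~0.0078125 in both cases

print(np.max(np.abs(x**n)))                        # x^n has sup-norm 1

roots = np.linspace(-1, 1, n)                      # monic polynomial with equispaced roots
equispaced = np.prod(x[:, None] - roots[None, :], axis=1)
print(np.max(np.abs(equispaced)))                  # noticeably larger than 2^{1-n}
```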
Remark
By the equioscillation theorem, among all the polynomials of degree \le n, the polynomial f minimizes \|f\|_\infty on [-1, 1] if and only if there are n + 2 points -1 \le x_0 < x_1 < \cdots < x_{n+1} \le 1 such that |f(x_i)| = \|f\|_\infty.
Of course, the null polynomial on the interval [-1, 1] can be approximated by itself and minimizes the ∞-norm.
Above, however, |f| reaches its maximum only n + 1 times because we are searching for the best polynomial of degree n (therefore the theorem evoked previously cannot be used).
Chebyshev polynomials as special cases of more general polynomial families
The Chebyshev polynomials are a special case of the ultraspherical or
Gegenbauer polynomials C_n^(x), which themselves are a special case of the
Jacobi polynomials P_n^(x):
\begin{align}
T_n(x) &= \frac{n}{2} \lim_{q \to 0} \frac{1}{q}\,C_n^{(q)}(x) \qquad ~\text{if}~ n \ge 1, \\
&= \frac{1}{\binom{n-\frac12}{n}}\, P_n^{\left(-\frac12,\,-\frac12\right)}(x) = \frac{2^{2n}}{\binom{2n}{n}}\, P_n^{\left(-\frac12,\,-\frac12\right)}(x)~, \\[1ex]
U_n(x) &= C_n^{(1)}(x) \\
&= \frac{n+1}{\binom{n+\frac12}{n}}\, P_n^{\left(\frac12,\,\frac12\right)}(x) = \frac{2^{2n+1}}{\binom{2n+2}{n+1}}\, P_n^{\left(\frac12,\,\frac12\right)}(x)~.
\end{align}
Chebyshev polynomials are also a special case of
Dickson polynomials:
D_n(2x\alpha,\alpha^2)= 2\alpha^{n}\,T_n(x),
E_n(2x\alpha,\alpha^2)= \alpha^{n}\,U_n(x).
In particular, when \alpha=\tfrac12, they are related by D_n(x,\tfrac14) = 2^{1-n}\,T_n(x) and E_n(x,\tfrac14) = 2^{-n}\,U_n(x).
Other properties
The curves given by y = T_n(x), or equivalently, by the parametric equations y = T_n(\cos\theta) = \cos(n\theta), x = \cos\theta, are a special case of Lissajous curves with frequency ratio equal to n.
Similar to the formula:
T_n(\cos\theta) = \cos(n\theta),
we have the analogous formula:
T_{2n+1}(\sin\theta) = (-1)^n \sin\left(\left(2n+1\right)\theta\right).
For x \ne 0:
T_n\!\left(\frac{x + x^{-1}}{2}\right) = \frac{x^n + x^{-n}}{2}
and:
x^n = T_n\! \left(\frac{x + x^{-1}}{2}\right)
+ \frac{x - x^{-1}}{2}\ U_{n-1}\!\left(\frac{x + x^{-1}}{2}\right),
which follows from the fact that this holds by definition for x = e^{i\theta}.
There are relations between
Legendre polynomials and Chebyshev polynomials
\sum_{k=0}^{n} P_k\left(x\right)T_{n-k}\left(x\right) = \left(n+1\right)P_n\left(x\right),
\sum_{k=0}^{n} P_k\left(x\right)P_{n-k}\left(x\right) = U_n\left(x\right).
These identities can be proven using generating functions and discrete convolution.
Chebyshev polynomials as determinants
From their definition by recurrence it follows that the Chebyshev polynomials can be obtained as determinants of special tridiagonal matrices of size k \times k:
T_k(x) = \det
\begin{pmatrix}
x & 1 & 0 & \cdots & 0 \\
1 & 2x & 1 & \ddots & \vdots \\
0 & 1 & 2x & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & 1 \\
0 & \cdots & 0 & 1 & 2x
\end{pmatrix},
and similarly for
U_k.
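A sketch confirming the determinant representation numerically (the matrix has x in the top-left corner, 2x elsewhere on the diagonal, and 1s on the sub- and superdiagonals):

```python
import numpy as np

def T_det(k, x):
    """T_k(x) as the determinant of the k-by-k tridiagonal matrix above."""
    A = (np.diag(np.full(k, 2.0 * x))
         + np.diag(np.ones(k - 1), 1)
         + np.diag(np.ones(k - 1), -1))
    A[0, 0] = x
    return np.linalg.det(A)

theta = 0.9
x = np.cos(theta)
for k in range(1, 9):
    assert np.isclose(T_det(k, x), np.cos(k * theta))   # equals T_k(x)
```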
Examples
First kind
The first few Chebyshev polynomials of the first kind are
\begin{align}
T_0(x) &= 1 \\
T_1(x) &= x \\
T_2(x) &= 2x^2 - 1 \\
T_3(x) &= 4x^3 - 3x \\
T_4(x) &= 8x^4 - 8x^2 + 1 \\
T_5(x) &= 16x^5 - 20x^3 + 5x \\
T_6(x) &= 32x^6 - 48x^4 + 18x^2 - 1 \\
T_7(x) &= 64x^7 - 112x^5 + 56x^3 - 7x \\
T_8(x) &= 128x^8 - 256x^6 + 160x^4 - 32x^2 + 1 \\
T_9(x) &= 256x^9 - 576x^7 + 432x^5 - 120x^3 + 9x \\
T_{10}(x) &= 512x^{10} - 1280x^8 + 1120x^6 - 400x^4 + 50x^2 - 1
\end{align}
Second kind
The first few Chebyshev polynomials of the second kind are
\begin{align}
U_0(x) &= 1 \\
U_1(x) &= 2x \\
U_2(x) &= 4x^2 - 1 \\
U_3(x) &= 8x^3 - 4x \\
U_4(x) &= 16x^4 - 12x^2 + 1 \\
U_5(x) &= 32x^5 - 32x^3 + 6x \\
U_6(x) &= 64x^6 - 80x^4 + 24x^2 - 1 \\
U_7(x) &= 128x^7 - 192x^5 + 80x^3 - 8x \\
U_8(x) &= 256x^8 - 448 x^6 + 240 x^4 - 40 x^2 + 1 \\
U_9(x) &= 512x^9 - 1024 x^7 + 672 x^5 - 160 x^3 + 10 x \\
U_{10}(x) &= 1024x^{10} - 2304 x^8 + 1792 x^6 - 560 x^4 + 60 x^2 - 1
\end{align}
As a basis set
In the appropriate
Sobolev space, the set of Chebyshev polynomials form an
orthonormal basis
, so that a function in the same space can, on -1 \le x \le 1, be expressed via the expansion:
f(x) = \sum_{n=0}^{\infty} a_n T_n(x).
Furthermore, as mentioned previously, the Chebyshev polynomials form an
orthogonal basis which (among other things) implies that the coefficients a_n can be determined easily through the application of an inner product. This sum is called a Chebyshev series or a Chebyshev expansion.
Since a Chebyshev series is related to a
Fourier cosine series through a change of variables, all of the theorems, identities, etc. that apply to
Fourier series
have a Chebyshev counterpart.
These attributes include:
* The Chebyshev polynomials form a complete orthogonal system.
* The Chebyshev series converges to f(x) if the function is piecewise smooth and continuous. The smoothness requirement can be relaxed in most cases, as long as there are a finite number of discontinuities in f(x) and its derivatives.
* At a discontinuity, the series will converge to the average of the right and left limits.
The abundance of the theorems and identities inherited from Fourier series make the Chebyshev polynomials important tools in numeric analysis; for example they are the most popular general purpose basis functions used in the spectral method, often in favor of trigonometric series due to generally faster convergence for continuous functions (Gibbs' phenomenon is still a problem).
The Chebfun software package supports function manipulation based on their expansion in the Chebyshev basis.
Example 1
Consider the Chebyshev expansion of \log(1+x). One can express:
\log(1+x) = \sum_{n=0}^{\infty} a_n T_n(x)~.
One can find the coefficients either through the application of an inner product or by the discrete orthogonality condition. For the inner product:
\int_{-1}^{1}\,\frac{T_m(x)\,\log(1+x)}{\sqrt{1-x^2}}\,\mathrm dx = \sum_{n=0}^{\infty} a_n \int_{-1}^{1}\frac{T_n(x)\,T_m(x)}{\sqrt{1-x^2}}\,\mathrm dx,
which gives:
a_n = \begin{cases}
-\log 2 & \text{if}~ n = 0, \\
\frac{-2\,(-1)^n}{n} & \text{if}~ n > 0.
\end{cases}
Alternatively, when the inner product of the function being approximated cannot be evaluated, the discrete orthogonality condition gives an often useful result for ''approximate'' coefficients:
a_n \approx \frac{2-\delta_{0n}}{N}\,\sum_{k=0}^{N-1} T_n(x_k)\,\log(1+x_k),
where \delta_{ij} is the Kronecker delta function and the x_k are the N Gauss–Chebyshev zeros of T_N(x):
x_k = \cos\left(\frac{\pi\left(2k+1\right)}{2N}\right).
For any N, these approximate coefficients provide an exact approximation to the function at the points x_k with a controlled error between those points. The exact coefficients are obtained with N = \infty, thus representing the function exactly at all points in [-1,1]. The rate of convergence depends on the function and its smoothness.
This allows us to compute the approximate coefficients a_n very efficiently through the discrete cosine transform:
a_n \approx \frac{2-\delta_{0n}}{N}\,\sum_{k=0}^{N-1}\cos\left(\frac{n\pi\left(2k+1\right)}{2N}\right)\log(1+x_k).
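The approximate coefficients of this example can be computed with a few lines of NumPy, as a sketch (a production implementation would use a fast DCT routine, e.g. scipy.fft.dct, instead of the explicit sum):

```python
import numpy as np

N = 64
k = np.arange(N)
theta = np.pi * (2 * k + 1) / (2 * N)
x_k = np.cos(theta)                  # Gauss–Chebyshev zeros of T_N
f_k = np.log1p(x_k)                  # samples of log(1 + x)

for n in range(6):
    a_n = (2 - (n == 0)) / N * np.sum(np.cos(n * theta) * f_k)
    exact = -np.log(2) if n == 0 else -2 * (-1)**n / n
    print(n, a_n, exact)             # agreement improves as N grows
```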
Example 2
To provide another example:
\begin{align}
\left(1-x^2\right)^\alpha &= -\frac{1}{\sqrt{\pi}} \, \frac{\Gamma\left(\tfrac12+\alpha\right)}{\Gamma(\alpha+1)} + 2^{1-2\alpha}\,\sum_{n=0}^{\infty} \left(-1\right)^n \binom{2\alpha}{\alpha-n}\,T_{2n}(x) \\
&= 2^{-2\alpha}\,\sum_{n=0}^{\infty} \left(-1\right)^n \binom{2\alpha+1}{\alpha-n}\,U_{2n}(x).
\end{align}
Partial sums
The partial sums of:
f(x) = \sum_{n=0}^{\infty} a_n T_n(x)
are very useful in the approximation of various functions and in the solution of differential equations (see spectral method). Two common methods for determining the coefficients a_n are through the use of the inner product as in Galerkin's method and through the use of collocation which is related to interpolation.
As an interpolant, the N coefficients of the (N-1)st partial sum are usually obtained on the Chebyshev–Gauss–Lobatto points (or Lobatto grid), which results in minimum error and avoids Runge's phenomenon associated with a uniform grid. This collection of points corresponds to the extrema of the highest order polynomial in the sum, plus the endpoints, and is given by:
x_k = -\cos\left(\frac{k\pi}{N - 1}\right); \qquad k = 0, 1, \dots, N - 1.
Polynomial in Chebyshev form
An arbitrary polynomial of degree N can be written in terms of the Chebyshev polynomials of the first kind. Such a polynomial p(x) is of the form:
p(x) = \sum_{n=0}^{N} a_n T_n(x).
Polynomials in Chebyshev form can be evaluated using the Clenshaw algorithm.
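A minimal sketch of Clenshaw's algorithm for a Chebyshev-form polynomial (compare numpy.polynomial.chebyshev.chebval, which uses the same recursion; the function name here is illustrative):

```python
import numpy as np

def clenshaw(a, x):
    """Evaluate p(x) = sum_{n=0}^{N} a[n] T_n(x) by the Clenshaw recurrence."""
    b_kp1, b_kp2 = 0.0, 0.0
    for a_k in a[:0:-1]:                      # a_N, ..., a_1
        b_kp1, b_kp2 = 2 * x * b_kp1 - b_kp2 + a_k, b_kp1
    return a[0] + x * b_kp1 - b_kp2           # p(x) = a_0 + x b_1 - b_2

# Check against direct evaluation using T_n(cos theta) = cos(n theta):
rng = np.random.default_rng(0)
a = rng.standard_normal(8)
theta = 1.1
x = np.cos(theta)
direct = sum(a_n * np.cos(n * theta) for n, a_n in enumerate(a))
assert np.isclose(clenshaw(a, x), direct)
```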
Families of polynomials related to Chebyshev polynomials
Polynomials denoted C_n(x) and S_n(x) closely related to Chebyshev polynomials are sometimes used. They are defined by:
C_n(x) = 2T_n\left(\frac{x}{2}\right),\qquad S_n(x) = U_n\left(\frac{x}{2}\right)
and satisfy:
C_n(x) = S_n(x) - S_{n-2}(x).
A. F. Horadam called the polynomials C_n(x) Vieta–Lucas polynomials and denoted them v_n(x). He called the polynomials
S_n(x) Vieta–Fibonacci polynomials and denoted them s_n(x). Lists of both sets of polynomials are given in Viète's ''Opera Mathematica'', Chapter IX, Theorems VI and VII. The Vieta–Lucas and Vieta–Fibonacci polynomials of real argument are, up to a power of i and a shift of index in the case of the latter, equal to Lucas and Fibonacci polynomials L_n and F_n of imaginary argument.
Shifted Chebyshev polynomials of the first and second kinds are related to the Chebyshev polynomials by:
T_n^*(x) = T_n(2x-1),\qquad U_n^*(x) = U_n(2x-1).
When the argument of the Chebyshev polynomial satisfies 2x - 1 \in [-1, 1], the argument of the shifted Chebyshev polynomial satisfies x \in [0, 1]. Similarly, one can define shifted polynomials for generic intervals [a, b].
Around 1990 the terms "third-kind" and "fourth-kind" came into use in connection with Chebyshev polynomials, although the polynomials denoted by these terms had an earlier development under the name airfoil polynomials. According to J. C. Mason and G. H. Elliott, the terminology "third-kind" and "fourth-kind" is due to Walter Gautschi, "in consultation with colleagues in the field of orthogonal polynomials." The Chebyshev polynomials of the third kind are defined as:
V_n(x)=\frac{\cos\left(\left(n+\frac12\right)\theta\right)}{\cos\left(\frac{\theta}{2}\right)}=\sqrt{\frac{2}{1+x}}\;T_{2n+1}\!\left(\sqrt{\frac{1+x}{2}}\right)
and the Chebyshev polynomials of the fourth kind are defined as:
W_n(x)=\frac{\sin\left(\left(n+\frac12\right)\theta\right)}{\sin\left(\frac{\theta}{2}\right)}=U_{2n}\!\left(\sqrt{\frac{1+x}{2}}\right),
where \theta=\arccos x.
They coincide with the Dirichlet kernel.
In the airfoil literature V_n(x) and W_n(x) are denoted t_n(x) and u_n(x). The polynomial families T_n(x), U_n(x), V_n(x), and W_n(x) are orthogonal with respect to the weights:
\left(1-x^2\right)^{-1/2},\quad\left(1-x^2\right)^{1/2},\quad(1-x)^{-1/2}(1+x)^{1/2},\quad(1+x)^{-1/2}(1-x)^{1/2}
and are proportional to Jacobi polynomials P_n^{(\alpha,\beta)}(x) with:
(\alpha,\beta)=\left(-\frac12,-\frac12\right),\quad(\alpha,\beta)=\left(\frac12,\frac12\right),\quad(\alpha,\beta)=\left(-\frac12,\frac12\right),\quad(\alpha,\beta)=\left(\frac12,-\frac12\right).
All four families satisfy the recurrence p_n(x)=2x\,p_{n-1}(x)-p_{n-2}(x) with p_0(x) = 1, where p_n = T_n, U_n, V_n, or W_n, but they differ according to whether p_1(x) equals x, 2x, 2x-1, or 2x+1.
Even order modified Chebyshev polynomials
Some applications rely on Chebyshev polynomials but may be unable to accommodate the lack of a root at zero, which rules out the use of standard Chebyshev polynomials for these kinds of applications. Even order Chebyshev filter designs using equally terminated passive networks are an example of this. However, even order Chebyshev polynomials may be modified to move the lowest roots down to zero while still maintaining the desirable Chebyshev equi-ripple effect. Such modified polynomials contain two roots at zero, and may be referred to as even order modified Chebyshev polynomials. Even order modified Chebyshev polynomials may be created from the Chebyshev nodes in the same manner as standard Chebyshev polynomials.
P_N(x) = \prod_{i=1}^{N}(x-C_i)
where
* P_N is an ''N''-th order Chebyshev polynomial
* C_i is the ''i''-th Chebyshev node
In the case of even order modified Chebyshev polynomials, the even order modified Chebyshev nodes are used to construct the even order modified Chebyshev polynomials.
Pe_N(x) = \prod_{i=1}^{N}(x-Ce_i)
where
* P e_N is an ''N''-th order even order modified Chebyshev polynomial
* Ce_i is the ''i''-th even order modified Chebyshev node
For example, the 4th order Chebyshev polynomial from the example above is X^4 - X^2 + 0.125, which by inspection contains no roots at zero. Creating the polynomial from the even order modified Chebyshev nodes creates a 4th order even order modified Chebyshev polynomial of X^4 - 0.828427X^2, which by inspection contains two roots at zero, and may be used in applications requiring roots at zero.
See also
* Chebyshev rational functions
*Function approximation
* Discrete Chebyshev transform
* Markov brothers' inequality