In mathematics, divided differences is an algorithm, historically used for computing tables of logarithms and trigonometric functions. Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.

Divided differences is a recursive division process. Given a sequence of data points (x_0, y_0), \ldots, (x_n, y_n), the method calculates the coefficients of the interpolation polynomial of these points in the Newton form. It is sometimes denoted by a delta with a bar: \Delta\!\!\!\mid.


Definition

Given ''n'' + 1 data points (x_0, y_0), \ldots, (x_n, y_n), where the x_k are assumed to be pairwise distinct, the forward divided differences are defined as:

\begin{align}
[y_k] &:= y_k, && k \in \{0,\ldots,n\} \\
[y_k,\ldots,y_{k+j}] &:= \frac{[y_{k+1},\ldots,y_{k+j}] - [y_k,\ldots,y_{k+j-1}]}{x_{k+j} - x_k}, && k \in \{0,\ldots,n-j\},\ j \in \{1,\ldots,n\}.
\end{align}

To make the recursive process of computation clearer, the divided differences can be put in tabular form, where the columns correspond to the value of ''j'' above, and each entry in the table is computed from the difference of the entries to its immediate lower left and to its immediate upper left, divided by a difference of corresponding ''x''-values:

\begin{matrix}
x_0 & y_0 = [y_0] & & & \\
    &             & [y_0,y_1] & & \\
x_1 & y_1 = [y_1] & & [y_0,y_1,y_2] & \\
    &             & [y_1,y_2] & & [y_0,y_1,y_2,y_3] \\
x_2 & y_2 = [y_2] & & [y_1,y_2,y_3] & \\
    &             & [y_2,y_3] & & \\
x_3 & y_3 = [y_3] & & & \\
\end{matrix}
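The recursion above translates directly into a short table-building routine. The following is a minimal sketch in Python (the names divided_differences, xs, ys and table are ours, not from any standard library); entry table[j][k] corresponds to [y_k, \ldots, y_{k+j}]:

    def divided_differences(xs, ys):
        """Build the triangular scheme: table[j][k] holds [y_k, ..., y_{k+j}]."""
        n = len(xs) - 1
        table = [list(ys)]  # column j = 0: [y_k] := y_k
        for j in range(1, n + 1):
            prev = table[j - 1]
            table.append([(prev[k + 1] - prev[k]) / (xs[k + j] - xs[k])
                          for k in range(n - j + 1)])
        return table

The coefficients of the Newton form are then read off the top diagonal of the scheme, table[j][0] = [y_0, \ldots, y_j].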


Notation

Note that the divided difference [y_k,\ldots,y_{k+j}] depends on the values x_k,\ldots,x_{k+j} and y_k,\ldots,y_{k+j}, but the notation hides the dependency on the ''x''-values. If the data points are given by a function ''f'',

(x_0, y_0), \ldots, (x_n, y_n) = (x_0, f(x_0)), \ldots, (x_n, f(x_n)),

one sometimes writes the divided difference in the notation

f[x_k,\ldots,x_{k+j}] \ \stackrel{\text{def}}{=}\ [f(x_k),\ldots,f(x_{k+j})] = [y_k,\ldots,y_{k+j}].

Other notations for the divided difference of the function ''ƒ'' on the nodes ''x''_0, ..., ''x''_n are:

f[x_0,\ldots,x_n] = [x_0,\ldots,x_n]f = [x_0,\ldots,x_n; f] = D[x_0,\ldots,x_n]f.


Example

Divided differences for k = 0 and the first few values of j:

\begin{align}
[y_0] &= y_0 \\
[y_0,y_1] &= \frac{y_1 - y_0}{x_1 - x_0} \\
[y_0,y_1,y_2] &= \frac{[y_1,y_2] - [y_0,y_1]}{x_2 - x_0} = \frac{\frac{y_2-y_1}{x_2-x_1} - \frac{y_1-y_0}{x_1-x_0}}{x_2 - x_0} = \frac{y_2-y_1}{(x_2-x_1)(x_2-x_0)} - \frac{y_1-y_0}{(x_1-x_0)(x_2-x_0)} \\
[y_0,y_1,y_2,y_3] &= \frac{[y_1,y_2,y_3] - [y_0,y_1,y_2]}{x_3 - x_0}
\end{align}

Thus, the table corresponding to these terms up to two columns has the following form:

\begin{matrix}
x_0 & y_0 & & \\
    &     & [y_0,y_1] & \\
x_1 & y_1 & & [y_0,y_1,y_2] \\
    &     & [y_1,y_2] & \\
x_2 & y_2 & & \vdots \\
    &     & \vdots & \\
\vdots &  & & \vdots \\
    &     & \vdots & \\
x_n & y_n & & \\
\end{matrix}
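As a concrete instance (the numbers are ours, chosen only for illustration), take the three points (x_0, y_0) = (0, 1), (x_1, y_1) = (1, 3), (x_2, y_2) = (2, 2). Then [y_0] = 1, [y_0,y_1] = \frac{3-1}{1-0} = 2, [y_1,y_2] = \frac{2-3}{2-1} = -1, and [y_0,y_1,y_2] = \frac{-1-2}{2-0} = -\tfrac{3}{2}. The divided_differences sketch in the previous section returns exactly these columns: divided_differences([0, 1, 2], [1, 3, 2]) gives [[1, 3, 2], [2.0, -1.0], [-1.5]].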


Properties

* Linearity:
\begin{align}
(f+g)[x_0,\dots,x_n] &= f[x_0,\dots,x_n] + g[x_0,\dots,x_n] \\
(\lambda\cdot f)[x_0,\dots,x_n] &= \lambda\cdot f[x_0,\dots,x_n]
\end{align}
* Leibniz rule: (f\cdot g)[x_0,\dots,x_n] = f[x_0]\cdot g[x_0,\dots,x_n] + f[x_0,x_1]\cdot g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]\cdot g[x_n] = \sum_{r=0}^n f[x_0,\ldots,x_r]\cdot g[x_r,\ldots,x_n]
* Divided differences are symmetric: if \sigma : \{0,\dots,n\} \to \{0,\dots,n\} is a permutation then f[x_0, \dots, x_n] = f[x_{\sigma(0)}, \dots, x_{\sigma(n)}]
* Polynomial interpolation in the Newton form: if P is a polynomial function of degree \leq n, and p[x_0, \dots, x_n] is the divided difference, then P(x) = p[x_0] + p[x_0,x_1](x - x_0) + p[x_0,x_1,x_2](x - x_0)(x - x_1) + \cdots + p[x_0,\ldots,x_n](x - x_0)(x - x_1)\cdots(x - x_{n-1}) (a Horner-like evaluation sketch follows this list)
* If p is a polynomial function of degree < n, then p[x_0, \dots, x_n] = 0.
* Mean value theorem for divided differences: if f is ''n'' times differentiable, then f[x_0,\dots,x_n] = \frac{f^{(n)}(\xi)}{n!} for a number \xi in the open interval determined by the smallest and largest of the x_k's.
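The Newton-form property is what makes divided differences useful for interpolation in practice: once the coefficients p[x_0,\ldots,x_j] are known, P(x) can be evaluated with a Horner-like nesting. A minimal Python sketch (the name newton_eval is ours):

    def newton_eval(xs, coeffs, x):
        """Evaluate c_0 + c_1 (x-x_0) + ... + c_n (x-x_0)...(x-x_{n-1}),
        where coeffs[j] = p[x_0, ..., x_j], by Horner-like nesting."""
        result = coeffs[-1]
        for j in range(len(coeffs) - 2, -1, -1):
            result = result * (x - xs[j]) + coeffs[j]
        return result

For the example points (0, 1), (1, 3), (2, 2) above, coeffs = [1, 2, -1.5], and newton_eval([0, 1, 2], [1, 2, -1.5], 1) returns 3.0, reproducing y_1 as an interpolant must.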


Matrix form

The divided difference scheme can be put into an upper triangular matrix:

T_f(x_0,\dots,x_n) = \begin{pmatrix}
f[x_0] & f[x_0,x_1] & f[x_0,x_1,x_2] & \ldots & f[x_0,\dots,x_n] \\
0 & f[x_1] & f[x_1,x_2] & \ldots & f[x_1,\dots,x_n] \\
0 & 0 & f[x_2] & \ldots & f[x_2,\dots,x_n] \\
\vdots & \vdots & & \ddots & \vdots \\
0 & 0 & 0 & \ldots & f[x_n]
\end{pmatrix}.

Then the following hold:
* T_{f+g}(x) = T_f(x) + T_g(x)
* T_{\lambda\cdot f}(x) = \lambda T_f(x) if \lambda is a scalar
* T_{f\cdot g}(x) = T_f(x) \cdot T_g(x). This follows from the Leibniz rule. It means that multiplication of such matrices is commutative. Summarised, the matrices of divided difference schemes with respect to the same set of nodes ''x'' form a commutative ring.
* Since T_f(x) is a triangular matrix, its eigenvalues are obviously f(x_0), \dots, f(x_n).
* Let \delta_\xi be a Kronecker delta-like function, that is
\delta_\xi(t) = \begin{cases} 1 &: t = \xi, \\ 0 &: \text{else}. \end{cases}
Obviously f\cdot\delta_\xi = f(\xi)\cdot\delta_\xi, thus \delta_\xi is an eigenfunction of pointwise function multiplication. That is, T_{\delta_{x_i}}(x) is somehow an "eigenmatrix" of T_f(x): T_f(x) \cdot T_{\delta_{x_i}}(x) = f(x_i) \cdot T_{\delta_{x_i}}(x). However, all columns of T_{\delta_{x_i}}(x) are multiples of each other; the matrix rank of T_{\delta_{x_i}}(x) is 1. So one can compose the matrix of all eigenvectors of T_f(x) from the i-th column of each T_{\delta_{x_i}}(x). Denote the matrix of eigenvectors by U(x). Example:

U(x_0,x_1,x_2,x_3) = \begin{pmatrix}
1 & \frac{1}{x_1-x_0} & \frac{1}{(x_2-x_0)(x_2-x_1)} & \frac{1}{(x_3-x_0)(x_3-x_1)(x_3-x_2)} \\
0 & 1 & \frac{1}{x_2-x_1} & \frac{1}{(x_3-x_1)(x_3-x_2)} \\
0 & 0 & 1 & \frac{1}{x_3-x_2} \\
0 & 0 & 0 & 1
\end{pmatrix}

The diagonalization of T_f(x) can be written as U(x) \cdot \operatorname{diag}(f(x_0),\dots,f(x_n)) = T_f(x) \cdot U(x).
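These identities are easy to probe numerically. A rough check of the product rule T_{f\cdot g} = T_f \cdot T_g in Python with numpy (all names are ours, and the nodes and functions below are arbitrary test choices):

    import numpy as np

    def div_diff_matrix(f, xs):
        """Upper triangular T_f: entry (i, j) holds f[x_i, ..., x_j] for i <= j."""
        n = len(xs)
        T = np.zeros((n, n))
        for i in range(n):
            T[i, i] = f(xs[i])
        for width in range(1, n):            # fill diagonals of growing width
            for i in range(n - width):
                j = i + width
                T[i, j] = (T[i + 1, j] - T[i, j - 1]) / (xs[j] - xs[i])
        return T

    xs = [0.0, 1.0, 2.0, 4.0]
    f = lambda t: t * t
    g = lambda t: t + 1.0
    lhs = div_diff_matrix(lambda t: f(t) * g(t), xs)
    rhs = div_diff_matrix(f, xs) @ div_diff_matrix(g, xs)
    assert np.allclose(lhs, rhs)             # Leibniz rule in matrix form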


Polynomials and power series

The matrix

J = \begin{pmatrix}
x_0 & 1 & 0 & 0 & \cdots & 0 \\
0 & x_1 & 1 & 0 & \cdots & 0 \\
0 & 0 & x_2 & 1 & & 0 \\
\vdots & \vdots & & \ddots & \ddots & \\
0 & 0 & 0 & 0 & \ddots & 1 \\
0 & 0 & 0 & 0 & & x_n
\end{pmatrix}

contains the divided difference scheme for the identity function with respect to the nodes x_0,\dots,x_n; thus J^m contains the divided differences for the power function with exponent m. Consequently, the divided differences for a polynomial function p can be obtained by applying p to the matrix J: if

p(\xi) = a_0 + a_1 \cdot \xi + \dots + a_m \cdot \xi^m and p(J) = a_0 I + a_1 \cdot J + \dots + a_m \cdot J^m,

then T_p(x) = p(J). This is known as ''Opitz' formula''.

Now consider increasing the degree of p to infinity, i.e. turning the Taylor polynomial into a Taylor series. Let f be a function which corresponds to a power series. The divided difference scheme for f can be computed by applying the corresponding matrix series to J: if

f(\xi) = \sum_{k=0}^\infty a_k \xi^k and f(J) = \sum_{k=0}^\infty a_k J^k,

then T_f(x) = f(J).
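A small numerical check of Opitz' formula (numpy-based; the polynomial and nodes are our arbitrary choices): for p(\xi) = \xi^2 - 3\xi + 2 on the nodes (0, 1, 3), evaluating p at the matrix J reproduces the divided difference scheme of p.

    import numpy as np

    xs = np.array([0.0, 1.0, 3.0])
    J = np.diag(xs) + np.diag(np.ones(len(xs) - 1), 1)  # nodes on diagonal, ones above

    # p applied to a matrix argument (constant term times the identity):
    p = lambda M: M @ M - 3.0 * M + 2.0 * np.eye(len(xs))

    # Divided differences of p on xs, from the recursive definition:
    ys = xs**2 - 3.0 * xs + 2.0                   # p at the nodes: 2, 0, 2
    d01 = (ys[1] - ys[0]) / (xs[1] - xs[0])       # p[x_0, x_1] = -2
    d12 = (ys[2] - ys[1]) / (xs[2] - xs[1])       # p[x_1, x_2] = 1
    d012 = (d12 - d01) / (xs[2] - xs[0])          # p[x_0, x_1, x_2] = 1
    T_p = np.array([[ys[0], d01,   d012],
                    [0.0,   ys[1], d12],
                    [0.0,   0.0,   ys[2]]])
    assert np.allclose(p(J), T_p)                 # Opitz' formula: T_p = p(J)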


Alternative characterizations


Expanded form

\begin{align}
f[x_0] &= f(x_0) \\
f[x_0,x_1] &= \frac{f(x_0)}{x_0 - x_1} + \frac{f(x_1)}{x_1 - x_0} \\
f[x_0,x_1,x_2] &= \frac{f(x_0)}{(x_0-x_1)(x_0-x_2)} + \frac{f(x_1)}{(x_1-x_0)(x_1-x_2)} + \frac{f(x_2)}{(x_2-x_0)(x_2-x_1)} \\
f[x_0,x_1,x_2,x_3] &= \frac{f(x_0)}{(x_0-x_1)(x_0-x_2)(x_0-x_3)} + \frac{f(x_1)}{(x_1-x_0)(x_1-x_2)(x_1-x_3)} \\
&\quad + \frac{f(x_2)}{(x_2-x_0)(x_2-x_1)(x_2-x_3)} + \frac{f(x_3)}{(x_3-x_0)(x_3-x_1)(x_3-x_2)} \\
f[x_0,\dots,x_n] &= \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k\in\{0,\dots,n\}\setminus\{j\}} (x_j - x_k)}
\end{align}

With the help of the polynomial function \omega(\xi) = (\xi - x_0) \cdots (\xi - x_n) this can be written as

f[x_0,\dots,x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\omega'(x_j)}.
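The expanded form gives a direct, non-recursive way to evaluate a divided difference. The following quick Python check (names and test data are ours) confirms that it agrees with the recursion:

    import math

    xs = [0.0, 0.4, 1.1, 2.0]
    ys = [math.sin(x) for x in xs]

    # Recursive computation of f[x_0, ..., x_n]:
    col = list(ys)
    for j in range(1, len(xs)):
        col = [(col[k + 1] - col[k]) / (xs[k + j] - xs[k])
               for k in range(len(col) - 1)]
    recursive = col[0]

    # Expanded form: sum of f(x_j) / omega'(x_j) over all nodes:
    expanded = sum(y / math.prod(x - other for other in xs if other != x)
                   for x, y in zip(xs, ys))
    assert math.isclose(recursive, expanded)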


Peano form

If x_0 < x_1 < \cdots < x_n and n \geq 1, the divided differences can be expressed as

f[x_0,\ldots,x_n] = \frac{1}{(n-1)!} \int_{x_0}^{x_n} f^{(n)}(t)\, B_{n-1}(t) \, dt,

where f^{(n)} is the n-th derivative of the function f and B_{n-1} is a certain B-spline of degree n - 1 for the data points x_0,\dots,x_n, given by the formula

B_{n-1}(t) = \sum_{k=0}^n \frac{\left(\max(x_k - t,\, 0)\right)^{n-1}}{\omega'(x_k)}.

This is a consequence of the Peano kernel theorem; it is called the ''Peano form'' of the divided differences, and B_{n-1} is the ''Peano kernel'' for the divided differences, all named after Giuseppe Peano.
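For n = 2 the Peano form can be verified by straightforward numerical quadrature. A rough Python sketch (names, nodes and the test function are ours; the midpoint rule stands in for exact integration):

    import math

    xs = [0.0, 1.0, 2.0]                     # n = 2, so the kernel has degree 1
    omega_prime = [math.prod(xk - xl for xl in xs if xl != xk) for xk in xs]

    def B(t):
        """Peano kernel B_{n-1}(t) = sum_k max(x_k - t, 0)^(n-1) / omega'(x_k)."""
        return sum(max(xk - t, 0.0) / wp for xk, wp in zip(xs, omega_prime))

    # (1/(n-1)!) * integral of f''(t) B(t) dt for f = exp, by the midpoint rule:
    N = 20_000
    h = (xs[-1] - xs[0]) / N
    mids = (xs[0] + (i + 0.5) * h for i in range(N))
    integral = sum(math.exp(t) * B(t) for t in mids) * h    # (n-1)! = 1 here

    # The divided difference exp[x_0, x_1, x_2] from the recursion:
    e0, e1, e2 = (math.exp(x) for x in xs)
    dd = ((e2 - e1) / (xs[2] - xs[1]) - (e1 - e0) / (xs[1] - xs[0])) / (xs[2] - xs[0])
    assert math.isclose(integral, dd, rel_tol=1e-6)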


Forward and backward differences

When the data points are equidistantly distributed we get the special case called forward differences. They are easier to calculate than the more general divided differences.

Given ''n'' + 1 data points (x_0, y_0), \ldots, (x_n, y_n) with x_k = x_0 + kh for k = 0,\ldots,n and fixed h > 0, the forward differences are defined as

\begin{align}
\Delta^{(0)} y_k &:= y_k, && k = 0,\ldots,n \\
\Delta^{(j)} y_k &:= \Delta^{(j-1)} y_{k+1} - \Delta^{(j-1)} y_k, && k = 0,\ldots,n-j,\ j = 1,\dots,n,
\end{align}

whereas the backward differences are defined as

\begin{align}
\nabla^{(0)} y_k &:= y_k, && k = 0,\ldots,n \\
\nabla^{(j)} y_k &:= \nabla^{(j-1)} y_k - \nabla^{(j-1)} y_{k-1}, && k = j,\ldots,n,\ j = 1,\dots,n.
\end{align}

Thus the forward difference table is written as:

\begin{matrix}
y_0 & & & \\
 & \Delta y_0 & & \\
y_1 & & \Delta^2 y_0 & \\
 & \Delta y_1 & & \Delta^3 y_0 \\
y_2 & & \Delta^2 y_1 & \\
 & \Delta y_2 & & \\
y_3 & & & \\
\end{matrix}

whereas the backward difference table is written as:

\begin{matrix}
y_0 & & & \\
 & \nabla y_1 & & \\
y_1 & & \nabla^2 y_2 & \\
 & \nabla y_2 & & \nabla^3 y_3 \\
y_2 & & \nabla^2 y_3 & \\
 & \nabla y_3 & & \\
y_3 & & & \\
\end{matrix}

The relationship between divided differences and forward differences is

[y_j, y_{j+1}, \ldots, y_{j+k}] = \frac{1}{k! h^k} \Delta^{(k)} y_j,

whereas for backward differences:

[y_j, y_{j-1}, \ldots, y_{j-k}] = \frac{1}{k! h^k} \nabla^{(k)} y_j.
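Both difference schemes and the stated relation to divided differences are quick to check in code. A minimal Python sketch (names and test data are ours):

    import math

    def forward_differences(ys):
        """Rows of the forward scheme: rows[j][k] holds Delta^(j) y_k."""
        rows = [list(ys)]
        while len(rows[-1]) > 1:
            prev = rows[-1]
            rows.append([prev[k + 1] - prev[k] for k in range(len(prev) - 1)])
        return rows

    h, x0 = 0.5, 1.0
    xs = [x0 + k * h for k in range(5)]      # equidistant grid
    ys = [x**3 for x in xs]                  # sample f(t) = t^3 on the grid
    rows = forward_differences(ys)

    # Check [y_0, ..., y_3] = Delta^(3) y_0 / (3! h^3):
    col = list(ys)
    for j in range(1, 4):
        col = [(col[k + 1] - col[k]) / (xs[k + j] - xs[k])
               for k in range(len(col) - 1)]
    assert math.isclose(col[0], rows[3][0] / (math.factorial(3) * h**3))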


See also

* Difference quotient
* Neville's algorithm
* Polynomial interpolation
* Mean value theorem for divided differences
* Nörlund–Rice integral
* Pascal's triangle




External links

* An implementation in Haskell.