In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data. Given a data set of coordinate pairs (x_j, y_j) with 0 \leq j \leq k, the x_j are called ''nodes'' and the y_j are called ''values''. The Lagrange polynomial L(x) has degree \leq k and assumes each value at the corresponding node, L(x_j) = y_j. Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler. Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration, Shamir's secret sharing scheme in cryptography, and Reed–Solomon error correction in coding theory. For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation.


Definition

Given a set of k + 1 nodes \{x_0, x_1, \ldots, x_k\}, which must all be distinct, x_j \neq x_m for indices j \neq m, the Lagrange basis for polynomials of degree \leq k for those nodes is the set of polynomials \{\ell_0(x), \ell_1(x), \ldots, \ell_k(x)\}, each of degree k, which take values \ell_j(x_m) = 0 if m \neq j and \ell_j(x_j) = 1. Using the Kronecker delta this can be written \ell_j(x_m) = \delta_{jm}. Each basis polynomial can be explicitly described by the product:

:\begin{align} \ell_j(x) &= \frac{x - x_0}{x_j - x_0} \cdots \frac{x - x_{j-1}}{x_j - x_{j-1}} \cdot \frac{x - x_{j+1}}{x_j - x_{j+1}} \cdots \frac{x - x_k}{x_j - x_k} \\ &= \prod_{\substack{0 \le m \le k \\ m \neq j}} \frac{x - x_m}{x_j - x_m}. \end{align}

Notice that the numerator \prod_{m \neq j}(x - x_m) has k roots at the nodes \{x_m\}_{m \neq j}, while the denominator \prod_{m \neq j}(x_j - x_m) scales the resulting polynomial so that \ell_j(x_j) = 1. The Lagrange interpolating polynomial for those nodes through the corresponding ''values'' \{y_0, y_1, \ldots, y_k\} is the linear combination:

:L(x) = \sum_{j=0}^{k} y_j \ell_j(x).

Each basis polynomial has degree k, so the sum L(x) has degree \leq k, and it interpolates the data because

:L(x_m) = \sum_{j=0}^{k} y_j \ell_j(x_m) = \sum_{j=0}^{k} y_j \delta_{jm} = y_m.

The interpolating polynomial is unique. Proof: assume the polynomial M(x) of degree \leq k interpolates the data. Then the difference M(x) - L(x) is zero at the k + 1 distinct nodes \{x_0, x_1, \ldots, x_k\}. But the only polynomial of degree \leq k with more than k roots is the constant zero function, so M(x) - L(x) = 0, i.e. M(x) = L(x).
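The definition above translates directly into code. The following sketch (function names are illustrative; the nodes are assumed distinct) evaluates the basis polynomials and the interpolant by the product and sum formulas:

```python
# Direct evaluation of the Lagrange interpolating polynomial L(x) = sum_j y_j * l_j(x).

def lagrange_basis(xs, j, x):
    """Evaluate the j-th Lagrange basis polynomial l_j at x."""
    out = 1.0
    for m, xm in enumerate(xs):
        if m != j:
            out *= (x - xm) / (xs[j] - xm)
    return out

def lagrange_interpolate(xs, ys, x):
    """Evaluate L(x) for nodes xs and values ys."""
    return sum(ys[j] * lagrange_basis(xs, j, x) for j in range(len(xs)))

# L reproduces the data at the nodes, since l_j(x_m) = delta_{jm}.
xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
assert all(abs(lagrange_interpolate(xs, ys, xj) - yj) < 1e-12 for xj, yj in zip(xs, ys))
```

Each call costs O(k^2) operations; the barycentric form below reduces this to O(k) after a one-time precomputation.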


Barycentric form

Each Lagrange basis polynomial \ell_j(x) can be rewritten as the product of three parts: a function \ell(x) = \prod_m (x - x_m) common to every basis polynomial, a node-specific constant w_j = \prod_{m \neq j}(x_j - x_m)^{-1} (called the ''barycentric weight''), and a part representing the displacement from x_j to x:

:\ell_j(x) = \ell(x) \frac{w_j}{x - x_j}.

By factoring \ell(x) out of the sum, we can write the Lagrange polynomial in the so-called ''first barycentric form'':

:L(x) = \ell(x) \sum_{j=0}^k \frac{w_j}{x - x_j} y_j.

If the weights w_j have been pre-computed, this requires only \mathcal O(k) operations, compared to \mathcal O(k^2) for evaluating each Lagrange basis polynomial \ell_j(x) individually. The barycentric interpolation formula can also easily be updated to incorporate a new node x_{k+1} by dividing each of the w_j, j = 0 \dots k, by (x_j - x_{k+1}) and constructing the new w_{k+1} as above.

For any x, \sum_{j=0}^k \ell_j(x) = 1, because the constant function g(x) = 1 is the unique polynomial of degree \leq k interpolating the data \{(x_0, 1), (x_1, 1), \ldots, (x_k, 1)\}. We can thus further simplify the barycentric formula by dividing L(x) = L(x) / g(x):

:\begin{align} L(x) &= \ell(x) \sum_{j=0}^k \frac{w_j}{x - x_j} y_j \Bigg/ \ell(x) \sum_{j=0}^k \frac{w_j}{x - x_j} \\ &= \sum_{j=0}^k \frac{w_j}{x - x_j} y_j \Bigg/ \sum_{j=0}^k \frac{w_j}{x - x_j}. \end{align}

This is called the ''second form'' or ''true form'' of the barycentric interpolation formula. This second form has advantages in computation cost and accuracy: it avoids evaluation of \ell(x); the work to compute each term w_j/(x - x_j) in the denominator has already been done in computing \bigl(w_j/(x - x_j)\bigr) y_j, so computing the sum in the denominator costs only k addition operations; and for evaluation points x close to one of the nodes x_j, catastrophic cancellation would ordinarily be a problem for the value (x - x_j), but this quantity appears in both numerator and denominator, and the two cancel, leaving good relative accuracy in the final result.

Using this formula to evaluate L(x) at one of the nodes x_j results in the indeterminate form \infty \cdot y_j / \infty; computer implementations must replace such results by L(x_j) = y_j. Each Lagrange basis polynomial can also be written in barycentric form:

:\ell_j(x) = \frac{w_j}{x - x_j} \Bigg/ \sum_{m=0}^k \frac{w_m}{x - x_m}.
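A minimal Python sketch of the second barycentric form, including the required special case at the nodes (names are illustrative):

```python
# Second (true) barycentric form. The weights w_j are precomputed once in O(k^2);
# each subsequent evaluation then costs only O(k).

def barycentric_weights(xs):
    """w_j = prod_{m != j} 1 / (x_j - x_m)."""
    ws = []
    for j, xj in enumerate(xs):
        w = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                w /= (xj - xm)
        ws.append(w)
    return ws

def barycentric_eval(xs, ys, ws, x):
    num = den = 0.0
    for xj, yj, wj in zip(xs, ys, ws):
        if x == xj:       # the formula is indeterminate at a node,
            return yj     # so return the stored value directly
        t = wj / (x - xj)
        num += t * yj
        den += t
    return num / den
```

For example, for nodes 1, 2, 3 with values 1, 4, 9 (the data of the worked example below in this article), `barycentric_weights` returns [0.5, -1.0, 0.5] and the evaluation reproduces x^2.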


A perspective from linear algebra

Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using a standard monomial basis for our interpolation polynomial L(x) = \sum_{j=0}^k m_j x^j, we must invert the Vandermonde matrix (x_i)^j to solve L(x_i) = y_i for the coefficients m_j of L(x). By choosing a better basis, the Lagrange basis, L(x) = \sum_{j=0}^k y_j \ell_j(x), we merely get the identity matrix \delta_{ij}, which is its own inverse: the Lagrange basis automatically ''inverts'' the analog of the Vandermonde matrix. This construction is analogous to the Chinese remainder theorem. Instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by the linear polynomials (x - x_j). Furthermore, when the order is large, the fast Fourier transform can be used to solve for the coefficients of the interpolated polynomial.
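The claim that the Lagrange basis turns the interpolation matrix into the identity can be checked numerically: evaluating every basis polynomial at every node gives \ell_j(x_i) = \delta_{ij}. A small sketch (node set chosen arbitrarily for illustration):

```python
# The matrix [l_j(x_i)] of the interpolation problem in the Lagrange basis
# is the identity matrix, so no linear solve is needed.

def lagrange_basis(xs, j, x):
    out = 1.0
    for m, xm in enumerate(xs):
        if m != j:
            out *= (x - xm) / (xs[j] - xm)
    return out

xs = [0.0, 0.5, 1.0, 2.0]
matrix = [[lagrange_basis(xs, j, xi) for j in range(len(xs))] for xi in xs]
for i, row in enumerate(matrix):
    for j, entry in enumerate(row):
        assert abs(entry - (1.0 if i == j else 0.0)) < 1e-12
```

By contrast, the monomial basis would require solving a (often ill-conditioned) Vandermonde system for the same data.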


Example

We wish to interpolate f(x) = x^2 over the domain 1 \leq x \leq 3 at the three nodes:

:\begin{align} x_0 & = 1, & y_0 = f(x_0) & = 1, \\ x_1 & = 2, & y_1 = f(x_1) & = 4, \\ x_2 & = 3, & y_2 = f(x_2) & = 9. \end{align}

The node polynomial \ell is

:\ell(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6.

The barycentric weights are

:\begin{align} w_0 &= (1-2)^{-1}(1-3)^{-1} = \tfrac12, \\ w_1 &= (2-1)^{-1}(2-3)^{-1} = -1, \\ w_2 &= (3-1)^{-1}(3-2)^{-1} = \tfrac12. \end{align}

The Lagrange basis polynomials are

:\begin{align} \ell_0(x) &= \frac{x-2}{1-2} \cdot \frac{x-3}{1-3} = \tfrac12 x^2 - \tfrac52 x + 3, \\ \ell_1(x) &= \frac{x-1}{2-1} \cdot \frac{x-3}{2-3} = -x^2 + 4x - 3, \\ \ell_2(x) &= \frac{x-1}{3-1} \cdot \frac{x-2}{3-2} = \tfrac12 x^2 - \tfrac32 x + 1. \end{align}

The Lagrange interpolating polynomial is:

:L(x) = y_0 \cdot \ell_0(x) + y_1 \cdot \ell_1(x) + y_2 \cdot \ell_2(x) = x^2.

In (second) barycentric form,

:L(x) = \frac{\dfrac{1/2}{x-1} \cdot 1 + \dfrac{-1}{x-2} \cdot 4 + \dfrac{1/2}{x-3} \cdot 9}{\dfrac{1/2}{x-1} + \dfrac{-1}{x-2} + \dfrac{1/2}{x-3}}.
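The worked example can be confirmed numerically. The sketch below plugs the weights 1/2, -1, 1/2 into the second barycentric form and checks that the interpolant reproduces x^2 away from the nodes:

```python
# Numerical check of the worked example: nodes 1, 2, 3, values 1, 4, 9,
# barycentric weights 1/2, -1, 1/2.
xs, ys, ws = [1.0, 2.0, 3.0], [1.0, 4.0, 9.0], [0.5, -1.0, 0.5]

def L(x):
    """Second barycentric form; valid only away from the nodes."""
    terms = [w / (x - xj) for w, xj in zip(ws, xs)]
    return sum(t * y for t, y in zip(terms, ys)) / sum(terms)

for x in [1.5, 2.25, 2.9]:
    assert abs(L(x) - x * x) < 1e-12
```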


Notes

The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant. But, as can be seen from the construction, each time a node x_j changes, all Lagrange basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation (see above) or Newton polynomials.

Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes. The Lagrange basis polynomials can be used in numerical integration to derive the Newton–Cotes formulas.
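Runge's phenomenon can be observed numerically. The sketch below (plain Python; the node count and error grid are illustrative choices) interpolates Runge's function 1/(1 + 25x^2) on [-1, 1] with 21 equispaced versus 21 Chebyshev nodes and compares the maximum error on a fine grid:

```python
import math

def f(x):
    return 1.0 / (1.0 + 25.0 * x * x)

def interp(xs, x):
    """Direct Lagrange evaluation of the interpolant of f at nodes xs."""
    total = 0.0
    for j, xj in enumerate(xs):
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xj - xm)
        total += f(xj) * basis
    return total

def max_error(xs):
    pts = [-1.0 + 2.0 * i / 1000 for i in range(1001)]
    return max(abs(interp(xs, p) - f(p)) for p in pts)

n = 20
equi = [-1.0 + 2.0 * i / n for i in range(n + 1)]
cheb = [math.cos((2 * i + 1) * math.pi / (2 * (n + 1))) for i in range(n + 1)]

# Equispaced nodes oscillate wildly near the endpoints; Chebyshev nodes do not.
assert max_error(equi) > 1.0 > max_error(cheb)
```

Increasing n makes the equispaced error grow further while the Chebyshev error keeps shrinking, which is the divergence described above.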


Remainder in Lagrange interpolation formula

When interpolating a given function f by a polynomial of degree k at the nodes x_0, \ldots, x_k, we get the remainder R(x) = f(x) - L(x), which can be expressed as

:R(x) = f[x_0, \ldots, x_k, x]\, \ell(x) = \ell(x) \frac{f^{(k+1)}(\xi)}{(k+1)!}, \qquad x_0 < \xi < x_k,

where f[x_0, \ldots, x_k, x] is the notation for divided differences. Alternatively, the remainder can be expressed as a contour integral in the complex domain as

:R(x) = \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t - x)(t - x_0) \cdots (t - x_k)}\, dt = \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t - x)\, \ell(t)}\, dt.

The remainder can be bounded as

:|R(x)| \leq \frac{(x_k - x_0)^{k+1}}{(k+1)!} \max_{x_0 \leq \xi \leq x_k} |f^{(k+1)}(\xi)|.
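As a numerical sanity check of the bound (a sketch with an illustrative choice of f), take f(x) = e^x on the nodes 0, 1/2, 1, so k = 2 and f^{(3)} = e^x is maximized at the right endpoint:

```python
import math

xs = [0.0, 0.5, 1.0]  # k = 2

def L(x):
    """Interpolant of exp at the three nodes, direct Lagrange form."""
    total = 0.0
    for j, xj in enumerate(xs):
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xj - xm)
        total += math.exp(xj) * basis
    return total

# Bound: (x_k - x_0)^{k+1} / (k+1)! * max |f^{(k+1)}| = 1/6 * e ~ 0.453
bound = (xs[-1] - xs[0]) ** 3 / math.factorial(3) * math.exp(xs[-1])
worst = max(abs(math.exp(x) - L(x)) for x in [i / 200 for i in range(201)])
assert worst <= bound
```

The observed worst-case error is well inside the bound, as expected, since the bound discards the small factor |\ell(x)|.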


Derivation

Clearly, R(x) is zero at the nodes. To find R(x) at a point x_p, define a new function F(x) = R(x) - \tilde{R}(x) = f(x) - L(x) - \tilde{R}(x) and choose \tilde{R}(x) = C \cdot \prod_{i=0}^k (x - x_i), where C is the constant we are required to determine for a given x_p. We choose C so that F(x) has k + 2 zeroes (at all nodes and at x_p) between x_0 and x_k (including endpoints). Assuming that f(x) is (k+1)-times differentiable, and since L(x) and \tilde{R}(x) are polynomials and therefore infinitely differentiable, F(x) is (k+1)-times differentiable. By Rolle's theorem, F^{(1)}(x) has k + 1 zeroes, F^{(2)}(x) has k zeroes, \ldots, and F^{(k+1)} has 1 zero, say \xi, with x_0 < \xi < x_k. Explicitly writing F^{(k+1)}(\xi):

:F^{(k+1)}(\xi) = f^{(k+1)}(\xi) - L^{(k+1)}(\xi) - \tilde{R}^{(k+1)}(\xi),

:L^{(k+1)} = 0, \quad \tilde{R}^{(k+1)} = C \cdot (k+1)! (because the highest power of x in \tilde{R}(x) is k+1),

:0 = f^{(k+1)}(\xi) - C \cdot (k+1)!.

The equation can be rearranged as

:C = \frac{f^{(k+1)}(\xi)}{(k+1)!}.

Since F(x_p) = 0, we have

:R(x_p) = \tilde{R}(x_p) = \frac{f^{(k+1)}(\xi)}{(k+1)!} \prod_{i=0}^k (x_p - x_i).


Derivatives

The ''d''th derivative of a Lagrange interpolating polynomial can be written in terms of the derivatives of the basis polynomials,

:L^{(d)}(x) := \sum_{j=0}^{k} y_j \ell_j^{(d)}(x).

Recall (see above) that each Lagrange basis polynomial is

:\ell_j(x) = \prod_{\substack{m=0 \\ m \neq j}}^k \frac{x - x_m}{x_j - x_m}.

The first derivative can be found using the product rule:

:\begin{align} \ell_j'(x) &= \sum_{\substack{i=0 \\ i \neq j}}^k \Biggl[ \frac{1}{x_j - x_i} \prod_{\substack{m=0 \\ m \neq (i,j)}}^k \frac{x - x_m}{x_j - x_m} \Biggr] \\ &= \ell_j(x) \sum_{\substack{i=0 \\ i \neq j}}^k \frac{1}{x - x_i}. \end{align}

The second derivative is

:\begin{align} \ell_j''(x) &= \ell_j(x) \sum_{i \neq j} \sum_{m \neq (i,j)} \frac{1}{(x - x_i)(x - x_m)} \\ &= \ell_j(x) \Biggl[ \Biggl( \sum_{m \neq j} \frac{1}{x - x_m} \Biggr)^2 - \sum_{m \neq j} \frac{1}{(x - x_m)^2} \Biggr]. \end{align}

The third derivative is

:\ell_j'''(x) = \ell_j(x) \sum_{i \neq j} \sum_{m \neq (i,j)} \sum_{n \neq (i,j,m)} \frac{1}{(x - x_i)(x - x_m)(x - x_n)},

and likewise for higher derivatives. Note that all of these formulas for derivatives are invalid at or near a node. A method of evaluating all orders of derivatives of a Lagrange polynomial efficiently at all points of the domain, including the nodes, is to convert the Lagrange polynomial to power basis form and then evaluate the derivatives.
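The first-derivative formula \ell_j'(x) = \ell_j(x) \sum_{i \neq j} 1/(x - x_i) can be sketched and checked against the worked example, where L(x) = x^2 and hence L'(x) = 2x (valid only away from the nodes, as noted above):

```python
# First derivative of the Lagrange interpolant via l_j'(x) = l_j(x) * sum_{i != j} 1/(x - x_i).
xs, ys = [1.0, 2.0, 3.0], [1.0, 4.0, 9.0]

def basis(j, x):
    out = 1.0
    for m, xm in enumerate(xs):
        if m != j:
            out *= (x - xm) / (xs[j] - xm)
    return out

def basis_prime(j, x):
    # Invalid at the nodes: the 1/(x - x_i) terms blow up there.
    return basis(j, x) * sum(1.0 / (x - xi) for i, xi in enumerate(xs) if i != j)

def L_prime(x):
    return sum(yj * basis_prime(j, x) for j, yj in enumerate(ys))

assert abs(L_prime(2.5) - 5.0) < 1e-9   # d/dx x^2 = 2x at x = 2.5
```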


Finite fields

The Lagrange polynomial can also be computed in finite fields. This has applications in cryptography, such as in Shamir's Secret Sharing scheme.
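As a sketch of the finite-field case, the snippet below recovers a Shamir-style secret s = f(0) from k+1 shares (x_j, f(x_j)) of a degree-k polynomial over GF(p), using Lagrange interpolation evaluated at 0. The prime, polynomial, and share points are illustrative, not parameters of any particular deployment:

```python
# Lagrange interpolation at x = 0 over GF(p).
P = 2**31 - 1  # a Mersenne prime, chosen for illustration

def reconstruct_secret(shares, p=P):
    """Evaluate the interpolating polynomial of the shares at x = 0 modulo p."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % p        # factor (0 - x_m)
                den = den * (xj - xm) % p    # factor (x_j - x_m)
        # pow(den, -1, p) is the modular inverse (Python 3.8+).
        secret = (secret + yj * num * pow(den, -1, p)) % p
    return secret

# Hypothetical degree-2 sharing of the secret 1234: f(x) = 1234 + 5x + 7x^2 mod P.
f = lambda x: (1234 + 5 * x + 7 * x * x) % P
shares = [(x, f(x)) for x in (2, 5, 11)]
assert reconstruct_secret(shares) == 1234
```

Any 3 of the shares of this degree-2 polynomial reconstruct the same secret, while 2 shares reveal nothing about it; that is the core of the scheme.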


See also

* Neville's algorithm
* Newton form of the interpolation polynomial
* Bernstein polynomial
* Carlson's theorem
* Lebesgue constant
* The Chebfun system
* Table of Newtonian series
* Frobenius covariant
* Sylvester's formula
* Finite difference coefficient
* Hermite interpolation


External links

* ALGLIB has implementations in C++ / C# / VBA / Pascal.
* GSL has a polynomial interpolation code in C.
* SO has a MATLAB example that demonstrates the algorithm and recreates the first image in this article.
* Lagrange Method of Interpolation: Notes, PPT, Mathcad, Mathematica, MATLAB, Maple.
* Lagrange interpolation polynomial on www.math-linux.com.
* Excel Worksheet Function for Bicubic Lagrange Interpolation.
* Lagrange polynomials in Python.