In the mathematical field of numerical analysis, Runge's phenomenon is a problem of oscillation at the edges of an interval that occurs when using polynomial interpolation with polynomials of high degree over a set of equispaced interpolation points. It was discovered by Carl David Tolmé Runge (1901) when exploring the behavior of errors when using polynomial interpolation to approximate certain functions. The discovery was important because it shows that going to higher degrees does not always improve accuracy. The phenomenon is similar to the Gibbs phenomenon in Fourier series approximations.
Introduction

The Weierstrass approximation theorem states that for every continuous function ''f''(''x'') defined on an interval [''a'',''b''], there exists a set of polynomial functions ''P''_{''n''}(''x'') for ''n'' = 0, 1, 2, …, each of degree at most ''n'', that approximates ''f''(''x'') with uniform convergence over [''a'',''b''] as ''n'' tends to infinity, that is,
:$\lim_{n \to \infty} \left( \max_{a \le x \le b} \left| f(x) - P_n(x) \right| \right) = 0.$

Consider the case where one desires to interpolate through ''n''+1 equispaced points of a function ''f''(''x'') using the ''n''-degree polynomial ''P''_{''n''}(''x'') that passes through those points. Naturally, one might expect from Weierstrass' theorem that using more points would lead to a more accurate reconstruction of ''f''(''x''). However, this ''particular'' set of polynomial functions ''P''_{''n''}(''x'') is not guaranteed to have the property of uniform convergence; the theorem only states that a set of polynomial functions exists, without providing a general method of finding one.

The ''P''_{''n''}(''x'') produced in this manner may in fact diverge away from ''f''(''x'') as ''n'' increases; this typically occurs in an oscillating pattern that magnifies near the ends of the interpolation points. This phenomenon is attributed to Runge.

Problem

Consider the Runge function
:$f(x) = \frac{1}{1 + 25x^2}$
(a scaled version of the Witch of Agnesi). Runge found that if this function is interpolated at equidistant points ''x''_{''i''} between −1 and 1 such that
:$x_i = \frac{2i}{n} - 1, \quad i \in \left\{ 0, 1, \dots, n \right\},$
with a polynomial ''P''_{''n''}(''x'') of degree ≤ ''n'', the resulting interpolation oscillates toward the ends of the interval, i.e. close to −1 and 1. It can even be proven that the interpolation error increases (without bound) when the degree of the polynomial is increased:
:$\lim_{n \to \infty} \left( \max_{-1 \le x \le 1} \left| f(x) - P_n(x) \right| \right) = \infty.$
This shows that high-degree polynomial interpolation at equidistant points can be troublesome.
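
The divergence can be reproduced with a few lines of code. The sketch below (plain NumPy; the helper names are illustrative, not from a reference implementation) interpolates the Runge function at ''n''+1 equidistant nodes and estimates the maximum error on a fine grid; the error grows as the degree increases.

```python
import numpy as np

def runge(x):
    # Runge's function f(x) = 1 / (1 + 25 x^2)
    return 1.0 / (1.0 + 25.0 * x**2)

def equi_interp_max_error(n):
    """Max |f - P_n| on [-1, 1] when P_n interpolates f at n+1 equidistant nodes."""
    nodes = np.linspace(-1.0, 1.0, n + 1)
    # A degree-n fit through n+1 points is exact interpolation.
    coeffs = np.polynomial.polynomial.polyfit(nodes, runge(nodes), n)
    xs = np.linspace(-1.0, 1.0, 2001)
    return float(np.max(np.abs(runge(xs) - np.polynomial.polynomial.polyval(xs, coeffs))))

# The maximum error grows with the degree instead of shrinking.
errors = {n: equi_interp_max_error(n) for n in (5, 10, 20)}
```

For these degrees the maximum error climbs from roughly 0.4 at ''n'' = 5 to far above 10 by ''n'' = 20, matching the limit above.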

Reason

Runge's phenomenon is the consequence of two properties of this problem.
* The magnitude of the ''n''-th order derivatives of this particular function grows quickly when ''n'' increases.
* The equidistance between points leads to a Lebesgue constant that increases quickly when ''n'' increases.

The phenomenon is graphically obvious because both properties combine to increase the magnitude of the oscillations.

The error between the generating function and the interpolating polynomial of order ''n'' is given by
:$f(x) - P_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i)$
for some $\xi$ in (−1, 1). Thus,
:$\max_{-1 \le x \le 1} |f(x) - P_n(x)| \le \max_{-1 \le \xi \le 1} \frac{\left| f^{(n+1)}(\xi) \right|}{(n+1)!} \max_{-1 \le x \le 1} \prod_{i=0}^{n} |x - x_i|.$

Denote by $w_n(x)$ the nodal function
:$w_n(x) = (x - x_0)(x - x_1)\cdots(x - x_n)$
and let $W_n$ be the maximum of the magnitude of the $w_n$ function:
:$W_n = \max_{-1 \le x \le 1} |w_n(x)|.$
It is elementary to prove that with equidistant nodes
:$W_n \le n!\,h^{n+1}$
where $h = 2/n$ is the step size.

Moreover, assume that the (''n''+1)-th derivative of $f$ is bounded, i.e.
:$\max_{-1 \le x \le 1} \left| f^{(n+1)}(x) \right| \le M_{n+1}.$
Therefore,
:$\max_{-1 \le x \le 1} |f(x) - P_n(x)| \le M_{n+1} \frac{n!\,h^{n+1}}{(n+1)!} = M_{n+1} \frac{h^{n+1}}{n+1}.$

But the magnitude of the (''n''+1)-th derivative of Runge's function grows quickly with ''n''; for this function one may take $M_{n+1} \le (n+1)!\,5^{n+1}$. The consequence is that the resulting upper bound, $n! \left( \tfrac{10}{n} \right)^{n+1}$, tends to infinity when ''n'' tends to infinity. Although often used to explain the Runge phenomenon, the fact that the upper bound of the error goes to infinity does not, of course, necessarily imply that the error itself also diverges with ''n''.
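
Both factors above can be checked numerically. The sketch below (illustrative names, fine-grid estimates) computes $W_n$ for equidistant nodes, confirms the bound $W_n \le n!\,h^{n+1}$, and contrasts it with Chebyshev nodes, whose monic nodal polynomial is $T_{n+1}(x)/2^n$ with maximum exactly $2^{-n}$:

```python
import math
import numpy as np

def nodal_max(nodes):
    """Fine-grid estimate of W_n = max over [-1, 1] of |(x - x_0)...(x - x_n)|."""
    xs = np.linspace(-1.0, 1.0, 5001)
    w = np.ones_like(xs)
    for xi in nodes:
        w = w * (xs - xi)
    return float(np.max(np.abs(w)))

W_equi, W_cheb, bound = {}, {}, {}
for n in (5, 10, 15):
    h = 2.0 / n
    equi = np.linspace(-1.0, 1.0, n + 1)
    # Chebyshev nodes: the n+1 roots of T_{n+1}
    cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
    W_equi[n] = nodal_max(equi)
    W_cheb[n] = nodal_max(cheb)
    bound[n] = math.factorial(n) * h ** (n + 1)   # W_n <= n! h^{n+1}
```

The equidistant nodal maximum respects the stated bound, while the Chebyshev nodal maximum is orders of magnitude smaller for the same ''n''.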

Mitigations to the problem

Change of interpolation points

The oscillation can be minimized by using nodes that are distributed more densely towards the edges of the interval, specifically, with asymptotic density (on the interval [−1,1]) given by the formula $1/\sqrt{1-x^2}$. A standard example of such a set of nodes is the Chebyshev nodes, for which the maximum error in approximating the Runge function is guaranteed to diminish with increasing polynomial order.
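
This guarantee is easy to check. The following sketch (illustrative names, using NumPy's Chebyshev module) interpolates the Runge function at Chebyshev nodes; the maximum error now decreases as the degree grows:

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def cheb_interp_max_error(n):
    """Max |f - P_n| when P_n interpolates f at the n+1 Chebyshev nodes."""
    # Chebyshev nodes: roots of T_{n+1}
    nodes = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
    coeffs = np.polynomial.chebyshev.chebfit(nodes, runge(nodes), n)
    xs = np.linspace(-1.0, 1.0, 2001)
    return float(np.max(np.abs(runge(xs) - np.polynomial.chebyshev.chebval(xs, coeffs))))

# Unlike the equidistant case, the error now shrinks with the degree.
errs = {n: cheb_interp_max_error(n) for n in (5, 10, 20, 40)}
```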

S-Runge algorithm without resampling

When equidistant samples must be used because resampling on well-behaved sets of nodes is not feasible, the S-Runge algorithm can be considered. In this approach, the original set of nodes is mapped on the set of Chebyshev nodes, providing a stable polynomial reconstruction. The peculiarity of this method is that there is no need of resampling at the mapped nodes, which are also called ''fake nodes''. A Python implementation of this procedure can be found here.
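
A minimal sketch of the fake-nodes idea follows, under the assumption that the mapping used is $S(x) = -\cos(\pi(x+1)/2)$, which sends equispaced points of [−1, 1] to Chebyshev–Lobatto points (the mapping choice and variable names are illustrative; consult the published algorithm for details):

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def S(x):
    # Assumed mapping: sends equispaced points of [-1, 1] to
    # Chebyshev-Lobatto points, so no resampling of f is needed.
    return -np.cos(np.pi * (x + 1.0) / 2.0)

n = 20
x_equi = np.linspace(-1.0, 1.0, n + 1)   # the only samples available
y = runge(x_equi)

# Interpolate the available data at the mapped ("fake") nodes S(x_i) ...
coeffs = np.polynomial.chebyshev.chebfit(S(x_equi), y, n)
# ... and evaluate the interpolant at S(x) rather than x.
xs = np.linspace(-1.0, 1.0, 2001)
p = np.polynomial.chebyshev.chebval(S(xs), coeffs)

max_err = float(np.max(np.abs(runge(xs) - p)))
recon = float(np.max(np.abs(np.polynomial.chebyshev.chebval(S(x_equi), coeffs) - y)))
```

The interpolant still matches the given equidistant samples, but its oscillations stay small, in contrast with direct degree-20 equidistant interpolation.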

Use of piecewise polynomials

The problem can be avoided by using spline curves, which are piecewise polynomials. When trying to decrease the interpolation error, one can increase the number of polynomial pieces which are used to construct the spline instead of increasing the degree of the polynomials used.
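
As an illustration, even the simplest piecewise polynomial, the piecewise-linear interpolant, converges on equidistant data where a single high-degree polynomial diverges (a sketch with illustrative names; cubic splines behave the same way with faster $h^4$ convergence):

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def piecewise_linear_max_error(num_pts):
    """Max error of the piecewise-linear interpolant on equidistant nodes."""
    nodes = np.linspace(-1.0, 1.0, num_pts)
    xs = np.linspace(-1.0, 1.0, 2001)
    return float(np.max(np.abs(runge(xs) - np.interp(xs, nodes, runge(nodes)))))

# Adding pieces (more nodes) shrinks the error; no oscillation appears.
err_11, err_41 = piecewise_linear_max_error(11), piecewise_linear_max_error(41)
```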

Constrained minimization

One can also fit a polynomial of higher degree (for instance, with $n$ points use a polynomial of order $N = n^2$ instead of $n+1$), and fit an interpolating polynomial whose first (or second) derivative has minimal $L^2$ norm.

A similar approach is to minimize a constrained version of the $L^p$ distance between the polynomial's $m$-th derivative and the mean value of its $m$-th derivative. Explicitly, to minimize
:$V_p = \int_a^b \left| \frac{\mathrm{d}^m P_N(x)}{\mathrm{d}x^m} - \frac{1}{b-a} \int_a^b \frac{\mathrm{d}^m P_N(z)}{\mathrm{d}z^m} \,\mathrm{d}z \right|^p \mathrm{d}x - \sum_{i=1}^n \lambda_i \left( P_N(x_i) - f(x_i) \right),$
where $N \ge n - 1$ and $m < N$, with respect to the polynomial coefficients and the Lagrange multipliers, $\lambda_i$. When $N = n - 1$, the constraint equations generated by the Lagrange multipliers reduce $P_N(x)$ to the minimum polynomial that passes through all $n$ points. At the opposite end, $\lim_{N \to \infty} P_N(x)$ will approach a form very similar to a piecewise polynomial approximation. When $m = 1$, in particular, $\lim_{N \to \infty} P_N(x)$ approaches the linear piecewise polynomials, i.e. connecting the interpolation points with straight lines.

The role played by $p$ in the process of minimizing $V_p$ is to control the importance of the size of the fluctuations away from the mean value. The larger $p$ is, the more large fluctuations are penalized compared to small ones. The greatest advantage of the Euclidean norm, $p = 2$, is that it allows for analytic solutions and it guarantees that $V_p$ will only have a single minimum. When $p \neq 2$ there can be multiple minima in $V_p$, making it difficult to ensure that a particular minimum found will be the global minimum instead of a local one.

Least squares fitting

Another method is fitting a polynomial of lower degree using the method of least squares. Generally, when using $m$ equidistant points, if $N < 2\sqrt{m}$, then the least squares approximation $P_N(x)$ is well-conditioned.
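
A sketch of this approach (the parameter choices are illustrative): with $m = 100$ equidistant samples, a degree $N = 19 < 2\sqrt{100}$ least-squares fit stays well-behaved, while plain interpolation of the same degree oscillates wildly.

```python
import numpy as np
P = np.polynomial.polynomial

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

xs = np.linspace(-1.0, 1.0, 2001)

# Least squares: m equidistant samples, degree N below 2*sqrt(m) = 20.
m, N = 100, 19
x_ls = np.linspace(-1.0, 1.0, m)
c_ls = P.polyfit(x_ls, runge(x_ls), N)
err_ls = float(np.max(np.abs(runge(xs) - P.polyval(xs, c_ls))))

# For contrast: plain interpolation of the same degree at N+1 equidistant nodes.
x_it = np.linspace(-1.0, 1.0, N + 1)
c_it = P.polyfit(x_it, runge(x_it), N)
err_it = float(np.max(np.abs(runge(xs) - P.polyval(xs, c_it))))
```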

Bernstein polynomial

Using Bernstein polynomials, one can uniformly approximate every continuous function in a closed interval, although this method is rather computationally expensive.
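
Bernstein approximation is easy to state in code. The sketch below (illustrative names) builds the degree-''n'' Bernstein polynomial of the Runge function rescaled to [0, 1]; the approximation converges uniformly but only slowly, which illustrates why the method is computationally expensive.

```python
import math
import numpy as np

def g(t):
    # Runge's function rescaled from [-1, 1] to [0, 1]
    x = 2.0 * t - 1.0
    return 1.0 / (1.0 + 25.0 * x**2)

def bernstein(f, n, t):
    """Evaluate the degree-n Bernstein polynomial of f at points t in [0, 1]."""
    t = np.asarray(t, dtype=float)
    total = np.zeros_like(t)
    for k in range(n + 1):
        # Sum of f(k/n) times the Bernstein basis polynomials
        total += f(k / n) * math.comb(n, k) * t**k * (1.0 - t)**(n - k)
    return total

ts = np.linspace(0.0, 1.0, 1001)
# Uniform convergence, but at a slow rate for this function.
err = {n: float(np.max(np.abs(g(ts) - bernstein(g, n, ts)))) for n in (10, 100)}
```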

External Fake Constraints Interpolation

This method proposes to optimally stack a dense distribution of constraints of the type ''P''″(''x'') = 0 on nodes positioned externally near the endpoints of each side of the interpolation interval, where ''P''″(''x'') is the second derivative of the interpolation polynomial. Those constraints are called external fake constraints, as they do not belong to the interpolation interval and they do not match the behaviour of the Runge function. The method has demonstrated better interpolation performance than piecewise polynomials (splines) in mitigating the Runge phenomenon.

Related statements from the approximation theory

* For every predefined table of interpolation nodes, there is a continuous function for which the sequence of interpolation polynomials on those nodes diverges.
* For every continuous function, there is a table of nodes on which the interpolation process converges.
* Chebyshev interpolation (i.e., on Chebyshev nodes) converges uniformly for every absolutely continuous function.

See also

* Compare with the Gibbs phenomenon for sinusoidal basis functions
* Taylor series
* Chebyshev nodes
* Schwarz lantern, another example of failure of convergence
* Stone–Weierstrass theorem
References