
In the mathematical field of numerical analysis, interpolation is a type of estimation, a method of constructing (finding) new data points based on the range of a discrete set of known data points.

In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate; that is, estimate the value of that function for an intermediate value of the independent variable.

A closely related problem is the approximation of a complicated function by a simple function. Suppose the formula for some given function is known, but too complicated to evaluate efficiently. A few data points from the original function can be interpolated to produce a simpler function which is still fairly close to the original. The resulting gain in simplicity may outweigh the loss from interpolation error and give better performance in the calculation process.


Example

This table gives some values of an unknown function f(x):

x    f(x)
0     0.0000
1     0.8415
2     0.9093
3     0.1411
4    −0.7568
5    −0.9589
6    −0.2794

Interpolation provides a means of estimating the function at intermediate points, such as x = 2.5.

We describe some methods of interpolation, differing in such properties as: accuracy, cost, number of data points needed, and smoothness of the resulting interpolant function.


Piecewise constant interpolation

The simplest interpolation method is to locate the nearest data value and assign the same value. In simple problems, this method is unlikely to be used, as linear interpolation (see below) is almost as easy, but in higher-dimensional multivariate interpolation, it can be a favourable choice for its speed and simplicity.
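
A minimal Python sketch of nearest-neighbour (piecewise constant) interpolation over the table above; the helper name nearest_interp is illustrative.

```python
import numpy as np

def nearest_interp(x, xs, ys):
    """Return the y-value of the known point whose x-coordinate is closest to x."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    idx = np.argmin(np.abs(xs - x))   # index of the nearest known x
    return ys[idx]

xs = [0, 1, 2, 3, 4, 5, 6]
ys = [0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794]

# x = 2.5 is equidistant from 2 and 3; argmin returns the first minimum, so f(2) = 0.9093 is used.
print(nearest_interp(2.5, xs, ys))
```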


Linear interpolation

One of the simplest methods is linear interpolation (sometimes known as lerp). Consider the above example of estimating ''f''(2.5). Since 2.5 is midway between 2 and 3, it is reasonable to take ''f''(2.5) midway between ''f''(2) = 0.9093 and ''f''(3) = 0.1411, which yields 0.5252.

Generally, linear interpolation takes two data points, say (''x''''a'', ''y''''a'') and (''x''''b'', ''y''''b''), and the interpolant is given by:

: y = y_a + \left( y_b - y_a \right) \frac{x - x_a}{x_b - x_a} \text{ at the point } \left( x, y \right)

: \frac{y - y_a}{x - x_a} = \frac{y_b - y_a}{x_b - x_a}

: \frac{y - y_a}{y_b - y_a} = \frac{x - x_a}{x_b - x_a}

This previous equation states that the slope of the new line between (x_a,y_a) and (x,y) is the same as the slope of the line between (x_a,y_a) and (x_b,y_b).

Linear interpolation is quick and easy, but it is not very precise. Another disadvantage is that the interpolant is not differentiable at the data points ''x''''k''.

The following error estimate shows that linear interpolation is not very precise. Denote the function which we want to interpolate by ''g'', and suppose that ''x'' lies between ''x''''a'' and ''x''''b'' and that ''g'' is twice continuously differentiable. Then the linear interpolation error is

: |f(x) - g(x)| \le C (x_b - x_a)^2 \quad\text{where}\quad C = \frac18 \max_{r \in [x_a, x_b]} |g''(r)|.

In words, the error is proportional to the square of the distance between the data points. The error in some other methods, including polynomial interpolation and spline interpolation (described below), is proportional to higher powers of the distance between the data points. These methods also produce smoother interpolants.
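
A minimal Python sketch of linear interpolation between two known points, reproducing the ''f''(2.5) ≈ 0.5252 estimate above; the function name lerp is illustrative.

```python
def lerp(x, xa, ya, xb, yb):
    """Linearly interpolate between (xa, ya) and (xb, yb) at position x."""
    return ya + (yb - ya) * (x - xa) / (xb - xa)

# Estimate f(2.5) from f(2) = 0.9093 and f(3) = 0.1411.
print(lerp(2.5, 2.0, 0.9093, 3.0, 0.1411))   # 0.5252
```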


Polynomial interpolation

Polynomial interpolation is a generalization of linear interpolation. Note that the linear interpolant is a linear function. We now replace this interpolant with a polynomial of higher degree.

Consider again the problem given above. The following sixth-degree polynomial goes through all seven points:

: f(x) = -0.0001521 x^6 - 0.003130 x^5 + 0.07321 x^4 - 0.3577 x^3 + 0.2255 x^2 + 0.9038 x.

Substituting ''x'' = 2.5, we find that ''f''(2.5) ≈ 0.59678.

Generally, if we have ''n'' data points, there is exactly one polynomial of degree at most ''n''−1 going through all the data points. The interpolation error is proportional to the distance between the data points to the power ''n''. Furthermore, the interpolant is a polynomial and thus infinitely differentiable. So, we see that polynomial interpolation overcomes most of the problems of linear interpolation.

However, polynomial interpolation also has some disadvantages. Calculating the interpolating polynomial is computationally expensive (see computational complexity) compared to linear interpolation. Furthermore, polynomial interpolation may exhibit oscillatory artifacts, especially at the end points (see Runge's phenomenon).

Polynomial interpolation can estimate local maxima and minima that are outside the range of the samples, unlike linear interpolation. For example, the interpolant above has a local maximum at ''x'' ≈ 1.566, ''f''(''x'') ≈ 1.003 and a local minimum at ''x'' ≈ 4.708, ''f''(''x'') ≈ −1.003. However, these maxima and minima may exceed the theoretical range of the function; for example, a function that is always positive may have an interpolant with negative values, and whose inverse therefore contains false vertical asymptotes. More generally, the shape of the resulting curve, especially for very high or low values of the independent variable, may be contrary to common sense; that is, to what is known about the experimental system which has generated the data points. These disadvantages can be reduced by using spline interpolation or restricting attention to Chebyshev polynomials.
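
A minimal sketch of polynomial interpolation with NumPy, assuming the tabulated values above; numpy.polyfit performs a least-squares fit, which for seven points and degree six reduces to exact interpolation (up to round-off).

```python
import numpy as np

xs = np.arange(7, dtype=float)                      # x = 0, 1, ..., 6
ys = np.array([0.0, 0.8415, 0.9093, 0.1411,
               -0.7568, -0.9589, -0.2794])          # tabulated f(x)

coeffs = np.polyfit(xs, ys, deg=6)    # unique degree-6 polynomial through the 7 points
print(np.polyval(coeffs, 2.5))        # roughly 0.597, close to the value quoted above
```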


Spline interpolation

Remember that linear interpolation uses a linear function for each of the intervals [''x''''k'', ''x''''k''+1]. Spline interpolation uses low-degree polynomials in each of the intervals, and chooses the polynomial pieces such that they fit smoothly together. The resulting function is called a spline. For instance, the natural cubic spline is piecewise cubic and twice continuously differentiable. Furthermore, its second derivative is zero at the end points. The natural cubic spline interpolating the points in the table above is given by

: f(x) = \begin{cases} -0.1522 x^3 + 0.9937 x, & \text{if } x \in [0,1] \\ -0.01258 x^3 - 0.4189 x^2 + 1.4126 x - 0.1396, & \text{if } x \in [1,2] \\ 0.1403 x^3 - 1.3359 x^2 + 3.2467 x - 1.3623, & \text{if } x \in [2,3] \\ 0.1579 x^3 - 1.4945 x^2 + 3.7225 x - 1.8381, & \text{if } x \in [3,4] \\ 0.05375 x^3 - 0.2450 x^2 - 1.2756 x + 4.8259, & \text{if } x \in [4,5] \\ -0.1871 x^3 + 3.3673 x^2 - 19.3370 x + 34.9282, & \text{if } x \in [5,6] \end{cases}

In this case we get ''f''(2.5) = 0.5972.

Like polynomial interpolation, spline interpolation incurs a smaller error than linear interpolation, while the interpolant is smoother and easier to evaluate than the high-degree polynomials used in polynomial interpolation. However, the global nature of the basis functions leads to ill-conditioning. This is completely mitigated by using splines of compact support, such as are implemented in Boost.Math and discussed in Kress.
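
A minimal sketch using SciPy's CubicSpline with natural boundary conditions (second derivative zero at the end points), which should reproduce the ''f''(2.5) ≈ 0.5972 value quoted above for the tabulated data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

xs = np.arange(7, dtype=float)
ys = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

# bc_type='natural' imposes a zero second derivative at both end points.
spline = CubicSpline(xs, ys, bc_type='natural')
print(spline(2.5))   # approximately 0.597
```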


Mimetic interpolation

Depending on the underlying discretisation of fields, different interpolants may be required. In contrast to other interpolation methods, which estimate functions on target points, mimetic interpolation evaluates the integral of fields on target lines, areas or volumes, depending on the type of field (scalar, vector, pseudo-vector or pseudo-scalar).

A key feature of mimetic interpolation is that vector calculus identities are satisfied, including Stokes' theorem and the divergence theorem. As a result, mimetic interpolation conserves line, area and volume integrals. Conservation of line integrals might be desirable when interpolating the electric field, for instance, since the line integral gives the electric potential difference at the endpoints of the integration path. Mimetic interpolation ensures that the error of estimating the line integral of an electric field is the same as the error obtained by interpolating the potential at the end points of the integration path, regardless of the length of the integration path.

Linear, bilinear and trilinear interpolation are also considered mimetic, even if it is the field values that are conserved (not the integral of the field). Apart from linear interpolation, area-weighted interpolation can be considered one of the first mimetic interpolation methods to have been developed.


Function approximation

Interpolation is a common way to approximate functions. Given a function f: [a,b] \to \mathbb{R} with a set of points x_1, x_2, \dots, x_n \in [a,b], one can form a function s: [a,b] \to \mathbb{R} such that f(x_i) = s(x_i) for i = 1, 2, \dots, n (that is, s interpolates f at these points). In general, an interpolant need not be a good approximation, but there are well-known and often reasonable conditions where it will be. For example, if f \in C^4([a,b]) (four times continuously differentiable) then cubic spline interpolation has an error bound given by

: \|f - s\|_\infty \leq C \|f^{(4)}\|_\infty h^4

where h = \max_{i \in \{1, 2, \dots, n-1\}} |x_{i+1} - x_i| and C is a constant.
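
A minimal sketch that illustrates (rather than proves) the h^4 error behaviour of cubic spline interpolation of a smooth function; the choice of sin as the test function and the use of SciPy's CubicSpline are assumptions made for the example.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def max_spline_error(n_knots):
    """Maximum error of a cubic spline interpolant of sin on [0, 2*pi] with n_knots knots."""
    xs = np.linspace(0.0, 2 * np.pi, n_knots)
    spline = CubicSpline(xs, np.sin(xs))
    fine = np.linspace(0.0, 2 * np.pi, 10001)
    return np.max(np.abs(np.sin(fine) - spline(fine)))

e_coarse = max_spline_error(11)   # knot spacing h
e_fine = max_spline_error(21)     # knot spacing h/2
print(e_coarse / e_fine)          # roughly 2**4 = 16, consistent with the h^4 bound
```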


Via Gaussian processes

A Gaussian process is a powerful non-linear interpolation tool. Many popular interpolation tools are actually equivalent to particular Gaussian processes. Gaussian processes can be used not only for fitting an interpolant that passes exactly through the given data points but also for regression; that is, for fitting a curve through noisy data. In the geostatistics community, Gaussian process regression is also known as Kriging.
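
A minimal sketch of noise-free Gaussian process interpolation, computing the posterior mean k_*^T K^{-1} y with a squared-exponential kernel; the length scale of 1.0 and the small jitter term are illustrative choices.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential (RBF) covariance matrix between two sets of 1-D points."""
    d = np.asarray(a)[:, None] - np.asarray(b)[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

xs = np.arange(7, dtype=float)
ys = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

K = rbf_kernel(xs, xs) + 1e-10 * np.eye(len(xs))   # tiny jitter for numerical stability
weights = np.linalg.solve(K, ys)                   # K^{-1} y

x_new = np.array([2.5])
k_star = rbf_kernel(x_new, xs)                     # covariances between the new point and the data
print(k_star @ weights)                            # posterior mean: the GP interpolant at x = 2.5
```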


Other forms

Other forms of interpolation can be constructed by picking a different class of interpolants. For instance, rational interpolation is interpolation by rational functions using Padé approximants, and trigonometric interpolation is interpolation by trigonometric polynomials using Fourier series. Another possibility is to use wavelets.

The Whittaker–Shannon interpolation formula can be used if the number of data points is infinite or if the function to be interpolated has compact support.

Sometimes, we know not only the value of the function that we want to interpolate at some points, but also its derivative. This leads to Hermite interpolation problems.

When each data point is itself a function, it can be useful to see the interpolation problem as a partial advection problem between each data point. This idea leads to the displacement interpolation problem used in transportation theory.
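
A minimal sketch of Whittaker–Shannon (sinc) interpolation of uniformly spaced samples; the test tone and the truncation to finitely many samples are simplifying assumptions.

```python
import numpy as np

def sinc_interp(t, samples, T):
    """Whittaker-Shannon interpolation: a sum of samples weighted by shifted sinc functions."""
    n = np.arange(len(samples))
    # np.sinc(x) is the normalised sinc function sin(pi*x) / (pi*x).
    return np.sum(samples * np.sinc((t - n * T) / T))

T = 0.1                                     # sampling period in seconds
n = np.arange(64)
samples = np.sin(2 * np.pi * 1.5 * n * T)   # 1.5 Hz tone, well below the 5 Hz Nyquist limit

print(sinc_interp(1.23, samples, T))        # close to sin(2*pi*1.5*1.23)
```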


In higher dimensions

Multivariate interpolation is the interpolation of functions of more than one variable. Methods include bilinear interpolation and bicubic interpolation in two dimensions, and trilinear interpolation in three dimensions. They can be applied to gridded or scattered data. Mimetic interpolation generalizes to ''n''-dimensional spaces where ''n'' > 3.

(Figures: nearest-neighbour, bilinear and bicubic interpolation of two-dimensional data.)
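
A minimal sketch of bilinear interpolation on a unit grid cell, performed as two linear interpolations along x followed by one along y; the corner values are illustrative.

```python
def bilinear(x, y, q00, q10, q01, q11):
    """Bilinear interpolation on the unit square.

    q00 = f(0, 0), q10 = f(1, 0), q01 = f(0, 1), q11 = f(1, 1), with 0 <= x, y <= 1.
    """
    f_y0 = q00 * (1 - x) + q10 * x      # interpolate along x at y = 0
    f_y1 = q01 * (1 - x) + q11 * x      # interpolate along x at y = 1
    return f_y0 * (1 - y) + f_y1 * y    # interpolate along y

# At the centre of a cell with corner values 1, 2, 3, 4 the result is their average, 2.5.
print(bilinear(0.5, 0.5, 1.0, 2.0, 3.0, 4.0))
```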


In digital signal processing

In the domain of digital signal processing, the term interpolation refers to the process of converting a sampled digital signal (such as a sampled audio signal) to one with a higher sampling rate (upsampling) using various digital filtering techniques (for example, convolution with a frequency-limited impulse signal). In this application there is a specific requirement that the harmonic content of the original signal be preserved without creating aliased harmonic content above the original Nyquist limit of the signal (that is, above fs/2 of the original signal sample rate). An early and fairly elementary discussion of this subject can be found in Rabiner and Crochiere's book ''Multirate Digital Signal Processing''.
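
A minimal sketch of integer-factor upsampling with SciPy's polyphase resampler, which carries out the zero insertion and anti-imaging low-pass filtering described above; the factor of 4 and the test tone are illustrative.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 8000                                    # original sample rate in Hz
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)              # 440 Hz tone, well below fs/2 = 4000 Hz

# Upsample by 4: zero-insertion plus a low-pass filter to suppress spectral images.
y = resample_poly(x, up=4, down=1)
print(len(x), len(y))                        # the output has four times as many samples
```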


Related concepts

The term ''extrapolation'' is used to find data points outside the range of known data points.

In curve fitting problems, the constraint that the interpolant has to go exactly through the data points is relaxed. It is only required to approach the data points as closely as possible (within some other constraints). This requires parameterizing the potential interpolants and having some way of measuring the error. In the simplest case this leads to least squares approximation.

Approximation theory studies how to find the best approximation to a given function by another function from some predetermined class, and how good this approximation is. This clearly yields a bound on how well the interpolant can approximate the unknown function.
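
A minimal sketch contrasting exact interpolation with least-squares curve fitting on the same data: a polynomial of degree lower than six, fitted with numpy.polyfit, does not pass through the points exactly but minimises the squared error (the data and the degree are illustrative).

```python
import numpy as np

xs = np.arange(7, dtype=float)
ys = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

fit = np.polyfit(xs, ys, deg=3)       # degree 3 < 6, so this is a least-squares fit, not an interpolant
residuals = ys - np.polyval(fit, xs)
print(residuals)                      # generally non-zero: the curve only approximates the data points
```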


Generalization

If we consider x as a variable in a topological space, and the function f(x) mapping to a Banach space, then the problem is treated as "interpolation of operators" (Colin Bennett, Robert C. Sharpley, ''Interpolation of Operators'', Academic Press, 1988). The classical results about interpolation of operators are the Riesz–Thorin theorem and the Marcinkiewicz theorem. There are also many other subsequent results.
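
As a point of reference, the Riesz–Thorin theorem can be stated as follows (a standard formulation, with M_0 and M_1 denoting the operator norms): if a linear operator T satisfies \|Tf\|_{q_0} \le M_0 \|f\|_{p_0} and \|Tf\|_{q_1} \le M_1 \|f\|_{p_1}, then for every 0 < \theta < 1,

: \|Tf\|_{q} \le M_0^{1-\theta} M_1^{\theta} \|f\|_{p}, \quad\text{where}\quad \frac{1}{p} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1}, \quad \frac{1}{q} = \frac{1-\theta}{q_0} + \frac{\theta}{q_1}.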


See also

* Barycentric coordinates – for interpolating within a triangle or tetrahedron
* Brahmagupta's interpolation formula
* Fractal interpolation
* Imputation (statistics)
* Lagrange interpolation
* Missing data
* Newton–Cotes formulas
* Radial basis function interpolation
* Simple rational approximation


References


External links

* Online tools for linear, quadratic, cubic spline and polynomial interpolation with visualisation and JavaScript source code
* Sol Tutorials - Interpolation Tricks
* Barycentric rational interpolation in Boost.Math (http://www.boost.org/doc/libs/release/libs/math/doc/html/math_toolkit/barycentric.html)
* Interpolation via the Chebyshev transform in Boost.Math