
In applied mathematics, polyharmonic splines are used for function approximation and data interpolation. They are very useful for interpolating and fitting scattered data in many dimensions. Special cases include thin plate splines and natural cubic splines in one dimension.


Definition

A polyharmonic spline is a linear combination of polyharmonic radial basis functions (RBFs) denoted by \varphi plus a polynomial term:

: f(\mathbf{x}) = \sum_{i=1}^N w_i\,\varphi(\|\mathbf{x} - \mathbf{c}_i\|) + \mathbf{v}^{\mathrm{T}} \begin{bmatrix} 1 \\ \mathbf{x} \end{bmatrix} \qquad (1)

where
* \mathbf{x} = [x_1 \ x_2 \ \cdots \ x_d]^{\mathrm{T}} (\mathrm{T} denotes matrix transpose, meaning \mathbf{x} is a column vector) is a real-valued vector of d independent variables,
* \mathbf{c}_i = [c_{i,1} \ c_{i,2} \ \cdots \ c_{i,d}]^{\mathrm{T}} are N vectors of the same size as \mathbf{x} (often called centers) that the curve or surface must interpolate,
* \mathbf{w} = [w_1 \ w_2 \ \cdots \ w_N]^{\mathrm{T}} are the N weights of the RBFs,
* \mathbf{v} = [v_1 \ v_2 \ \cdots \ v_{d+1}]^{\mathrm{T}} are the d+1 weights of the polynomial.

The polynomial with the coefficients \mathbf{v} improves fitting accuracy for polyharmonic smoothing splines and also improves extrapolation away from the centers \mathbf{c}_i. See the figures in the Examples section below for a comparison of splines with and without a polynomial term.

The polyharmonic RBFs are of the form:

: \varphi(r) = \begin{cases} r^k & \text{for } k = 1, 3, 5, \ldots, \\ r^k \ln(r) & \text{for } k = 2, 4, 6, \ldots, \end{cases} \qquad r = \|\mathbf{x} - \mathbf{c}_i\| = \sqrt{(\mathbf{x} - \mathbf{c}_i)^{\mathrm{T}}(\mathbf{x} - \mathbf{c}_i)}.

Other values of the exponent k are not useful (such as \varphi(r) = r^2), because a solution of the interpolation problem might not exist. To avoid problems at r = 0 (since \log(0) = -\infty), the polyharmonic RBFs with the natural logarithm might be implemented as:

: \varphi(r) = \begin{cases} r^{k-1} \ln(r^r) & \text{for } r < 1 \quad (\text{where } 0^0 \text{ is taken to be } 1), \\ r^k \ln(r) & \text{for } r \ge 1, \end{cases}

or, more simply, by adding a continuity extension at r = 0:

: \varphi(r) = \begin{cases} 0 & \text{for } r < \epsilon, \quad \text{where } \epsilon \text{ is a small positive number}, \\ r^k \ln(r) & \text{for } r \ge \epsilon. \end{cases}

The weights w_i and v_j are determined such that the function interpolates N given points (\mathbf{c}_i, f_i) (for i = 1, 2, \ldots, N) and fulfills the d+1 orthogonality conditions

: \sum_{i=1}^N w_i = 0, \qquad \sum_{i=1}^N w_i \mathbf{c}_i = \mathbf{0}.

All together, these constraints are equivalent to the symmetric linear system of equations

: \begin{bmatrix} A & B \\ B^{\mathrm{T}} & 0 \end{bmatrix} \begin{bmatrix} \mathbf{w} \\ \mathbf{v} \end{bmatrix} = \begin{bmatrix} \mathbf{f} \\ \mathbf{0} \end{bmatrix} \qquad (2)

where

: A_{ij} = \varphi(\|\mathbf{c}_i - \mathbf{c}_j\|), \quad B = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_N \end{bmatrix}^{\mathrm{T}}, \quad \mathbf{f} = [f_1, f_2, \ldots, f_N]^{\mathrm{T}}.

In order for this system of equations to have a unique solution, B must be full rank. B is full rank for very mild conditions on the input data. For example, in two dimensions, three centers forming a non-degenerate triangle ensure that B is full rank, and in three dimensions, four centers forming a non-degenerate tetrahedron ensure that B is full rank. As explained later, the linear transformation resulting from the restriction of the domain of the linear transformation A to the null space of B^{\mathrm{T}} is positive definite. This means that if B is full rank, the system of equations (2) always has a unique solution, and it can be solved using a linear solver specialised for symmetric matrices. The computed weights allow evaluation of the spline for any \mathbf{x} \in \mathbb{R}^d using equation (1). Many practical details of implementing and using polyharmonic splines are explained in Fasshauer. In Iske, polyharmonic splines are treated as special cases of other multiresolution methods in scattered data modelling.
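
The following is a minimal sketch, assuming NumPy, of how the system (2) can be assembled and solved and how equation (1) can then be evaluated; the helper names phi, fit_polyharmonic and evaluate are illustrative and do not come from any particular library. With k = 2 and d = 2 this is the thin plate spline.

 import numpy as np

 def phi(r, k=2):
     # Polyharmonic basis: r^k for odd k, r^k*ln(r) for even k (with the value 0 at r = 0).
     if k % 2:
         return r**k
     out = np.zeros_like(r)
     nz = r > 0
     out[nz] = r[nz]**k * np.log(r[nz])
     return out

 def fit_polyharmonic(centers, values, k=2):
     # Assemble and solve the symmetric block system (2) for the weights w and v.
     c = np.asarray(centers, dtype=float)          # shape (N, d)
     f = np.asarray(values, dtype=float)           # shape (N,)
     N, d = c.shape
     r = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
     A = phi(r, k)                                 # N x N
     B = np.hstack([np.ones((N, 1)), c])           # N x (d+1)
     M = np.block([[A, B], [B.T, np.zeros((d + 1, d + 1))]])
     rhs = np.concatenate([f, np.zeros(d + 1)])
     sol = np.linalg.solve(M, rhs)
     return sol[:N], sol[N:]                       # RBF weights w, polynomial weights v

 def evaluate(x, centers, w, v, k=2):
     # Evaluate equation (1) at the query points x, shape (M, d).
     x = np.atleast_2d(np.asarray(x, dtype=float))
     r = np.linalg.norm(x[:, None, :] - np.asarray(centers, dtype=float)[None, :, :], axis=-1)
     return phi(r, k) @ w + v[0] + x @ v[1:]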


Discussion

The main advantage of polyharmonic spline interpolation is that usually very good interpolation results are obtained for scattered data without performing any "tuning", so automatic interpolation is feasible. This is not the case for other radial basis functions. For example, the Gaussian function e^{-kr^2} needs to be tuned, so that k is selected according to the underlying grid of the independent variables. If this grid is non-uniform, a proper selection of k to achieve a good interpolation result is difficult or impossible. A short illustration of this point follows the list below.

The main disadvantages are:
* To determine the weights, a dense linear system of equations must be solved. Solving a dense linear system becomes impractical if the dimension N is large, since the memory required is O(N^2) and the number of operations required is O(N^3).
* Evaluating the computed polyharmonic spline function at M data points requires O(MN) operations. In many applications (image processing is an example), M is much larger than N, and if both numbers are large, this is not practical.
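
As a rough illustration of the tuning issue, the following sketch (assuming SciPy 1.7 or later; the test function, data sizes and epsilon values are arbitrary placeholders) fits the same scattered data with the tuning-free thin plate spline kernel and with a Gaussian kernel whose quality depends on the shape parameter epsilon:

 import numpy as np
 from scipy.interpolate import RBFInterpolator

 rng = np.random.default_rng(0)
 centers = rng.uniform(0, 1, size=(50, 2))             # scattered 2D data
 values = np.sin(4 * centers[:, 0]) * centers[:, 1]
 test = rng.uniform(0, 1, size=(500, 2))
 truth = np.sin(4 * test[:, 0]) * test[:, 1]

 # Polyharmonic (thin plate spline) kernel: no shape parameter to tune.
 tps = RBFInterpolator(centers, values, kernel='thin_plate_spline')
 print('thin_plate_spline:', np.max(np.abs(tps(test) - truth)))

 # Gaussian kernel: the quality depends strongly on the shape parameter epsilon.
 for eps in (0.1, 1.0, 10.0):
     gauss = RBFInterpolator(centers, values, kernel='gaussian', epsilon=eps)
     print(f'gaussian, epsilon={eps}:', np.max(np.abs(gauss(test) - truth)))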


Fast construction and evaluation methods

One straightforward approach to speeding up model construction and evaluation is to use a subset of the k nearest interpolation nodes to build a local model every time we evaluate the spline. As a result, the total time needed for model construction and evaluation at M points changes from O(N^3 + MN) to O(k^3 M). This can yield better timings if k is much less than N. Such an approach is advocated by some software libraries, the most notable being scipy.interpolate.RBFInterpolator (a usage sketch is given at the end of this section). The main drawback is that it introduces small discontinuities in the spline and requires problem-specific tuning: a proper choice of the neighbor count k.

Recently, methods have been developed to overcome the aforementioned difficulties without sacrificing the main advantages of polyharmonic splines. First, a number of methods for fast O(\log N) evaluation were proposed:
* Beatson et al. present a method to evaluate polyharmonic splines with r^k as the basis function at one point in 3 dimensions or fewer.
* Cherrie et al. present a method to evaluate polyharmonic splines with r^k \log r as the basis function at one point in 4 dimensions or fewer.
Second, an accelerated model construction by applying an iterative solver to an ACBF-preconditioned linear system was proposed by Brown et al. This approach reduces the running time from O(N^3) to O(N^2), and further to O(N\log N) when combined with accelerated evaluation techniques.

The approaches above are often employed by commercial geospatial data analysis libraries and by some open source implementations (e.g. ALGLIB). Sometimes domain decomposition methods are used to improve asymptotic behavior, reducing memory requirements from O(N^2) to O(N) and thus making polyharmonic splines suitable for datasets with more than 1,000,000 points.
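
The following usage sketch (assuming SciPy 1.7 or later and synthetic placeholder data) shows how the neighbors argument of scipy.interpolate.RBFInterpolator switches from the global dense solve to the local nearest-nodes models described at the start of this section:

 import numpy as np
 from scipy.interpolate import RBFInterpolator

 rng = np.random.default_rng(1)
 centers = rng.uniform(0, 1, size=(20000, 3))          # N = 20000 scattered 3D nodes
 values = np.linalg.norm(centers, axis=1)              # placeholder target values
 queries = rng.uniform(0, 1, size=(1000, 3))

 # A global model would require one dense 20000 x 20000 solve (O(N^3) work):
 # global_model = RBFInterpolator(centers, values, kernel='thin_plate_spline')

 # Local models: only the k nearest nodes are used per evaluation point,
 # trading global smoothness and exactness for much lower cost.
 local_model = RBFInterpolator(centers, values, kernel='thin_plate_spline', neighbors=50)
 print(local_model(queries)[:5])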


Reason for the name "polyharmonic"

A polyharmonic equation is a partial differential equation of the form \Delta^m f = 0 for any natural number m, where \Delta is the Laplace operator. For example, the biharmonic equation is \Delta^2 f = 0 and the triharmonic equation is \Delta^3 f = 0. All the polyharmonic radial basis functions are solutions of a polyharmonic equation (or more accurately, a modified polyharmonic equation with a Dirac delta function on the right-hand side instead of 0). For example, the thin plate radial basis function is a solution of the modified 2-dimensional biharmonic equation.

Applying the 2D Laplace operator (\Delta = \partial_{xx} + \partial_{yy}) to the thin plate radial basis function f_{\mathrm{tp}}(x,y) = (x^2+y^2)\log\sqrt{x^2+y^2} either by hand or using a computer algebra system shows that \Delta f_{\mathrm{tp}} = 4 + 4\log r. Applying the Laplace operator to \Delta f_{\mathrm{tp}} (this is \Delta^2 f_{\mathrm{tp}}) yields 0. But 0 is not exactly correct. To see this, replace r^2 = x^2+y^2 with \rho^2 = x^2+y^2+h^2 (where h is some small number tending to 0). The Laplace operator applied to 4\log\rho yields \Delta^2 f_{\mathrm{tp}} = 8h^2/\rho^4. For (x,y) = (0,0), the right-hand side of this equation approaches infinity as h approaches 0. For any other (x,y), the right-hand side approaches 0 as h approaches 0. This indicates that the right-hand side is a Dirac delta function. A computer algebra system will show that

: \lim_{h\to 0}\int_{-\infty}^\infty \int_{-\infty}^\infty \frac{8h^2}{(x^2+y^2+h^2)^2} \,dx \,dy = 8\pi.

So the thin plate radial basis function is a solution of the equation \Delta^2 f_{\mathrm{tp}} = 8\pi\delta(x,y).

Applying the 3D Laplacian (\Delta = \partial_{xx} + \partial_{yy} + \partial_{zz}) to the biharmonic RBF f_{\mathrm{bi}}(x,y,z) = \sqrt{x^2+y^2+z^2} yields \Delta f_{\mathrm{bi}} = 2/r, and applying the 3D \Delta^2 operator to the triharmonic RBF f_{\mathrm{tri}}(x,y,z) = (x^2+y^2+z^2)^{3/2} yields \Delta^2 f_{\mathrm{tri}} = 24/r. Letting \rho^2 = x^2+y^2+z^2+h^2 and computing \Delta(1/\rho) = -3h^2/\rho^5 again indicates that the right-hand sides of the PDEs for the biharmonic and triharmonic RBFs are Dirac delta functions. Since

: \lim_{h\to 0}\int_{-\infty}^\infty\int_{-\infty}^\infty\int_{-\infty}^\infty \frac{-3h^2}{(x^2+y^2+z^2+h^2)^{5/2}} \,dx \,dy \,dz = -4\pi,

the exact PDEs satisfied by the biharmonic and triharmonic RBFs are \Delta^2 f_{\mathrm{bi}} = -8\pi\delta(x,y,z) and \Delta^3 f_{\mathrm{tri}} = -96\pi\delta(x,y,z).
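
The symbolic computations mentioned above can be reproduced with a computer algebra system; the following is a sketch using SymPy (one possible choice of CAS, any other would do). The plane integral is written in polar coordinates, which is equivalent to the Cartesian double integral above.

 import sympy as sp

 x, y = sp.symbols('x y', real=True)
 h, r = sp.symbols('h r', positive=True)

 # Thin plate RBF and its Laplacian in 2D.
 f_tp = (x**2 + y**2) * sp.log(sp.sqrt(x**2 + y**2))
 lap = sp.simplify(sp.diff(f_tp, x, 2) + sp.diff(f_tp, y, 2))
 print(lap)          # equivalent to 4*log(r) + 4 with r = sqrt(x**2 + y**2)

 # Regularize with rho^2 = x^2 + y^2 + h^2 and apply the Laplacian once more.
 g = 4 * sp.log(sp.sqrt(x**2 + y**2 + h**2))
 bilap = sp.simplify(sp.diff(g, x, 2) + sp.diff(g, y, 2))
 print(bilap)        # 8*h**2/(x**2 + y**2 + h**2)**2

 # Integrate over the whole plane (polar coordinates, area element 2*pi*r dr):
 # the result is 8*pi for every h > 0, identifying 8*pi*delta(x, y) in the limit.
 integral = sp.integrate(8 * h**2 / (r**2 + h**2)**2 * 2 * sp.pi * r, (r, 0, sp.oo))
 print(sp.simplify(integral))   # 8*pi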


Polyharmonic smoothing splines

Polyharmonic splines minimize

: \sum_{i=1}^N \bigl(f(\mathbf{c}_i) - f_i\bigr)^2 + \lambda \int_{\mathcal{B}} |\nabla^m f(\mathbf{x})|^2 \,d\mathbf{x} \qquad (3)

where \mathcal{B} is some box in \mathbb{R}^d containing a neighborhood of all the centers, \lambda is some positive constant, and \nabla^m f is the vector of all mth-order partial derivatives of f. For example, in 2D \nabla^1 f = (f_x \ f_y) and \nabla^2 f = (f_{xx} \ f_{xy} \ f_{yx} \ f_{yy}), and in 3D \nabla^2 f = (f_{xx} \ f_{xy} \ f_{xz} \ f_{yx} \ f_{yy} \ f_{yz} \ f_{zx} \ f_{zy} \ f_{zz}). In 2D, |\nabla^2 f|^2 = f_{xx}^2 + 2f_{xy}^2 + f_{yy}^2, making the integral the simplified thin plate energy functional.

To show that polyharmonic splines minimize equation (3), the fitting term must be transformed into an integral using the definition of the Dirac delta function:

: \sum_{i=1}^N \bigl(f(\mathbf{c}_i) - f_i\bigr)^2 = \int_{\mathcal{B}} \sum_{i=1}^N \bigl(f(\mathbf{x}) - f_i\bigr)^2 \delta(\mathbf{x} - \mathbf{c}_i) \,d\mathbf{x}.

So equation (3) can be written as the functional

: J[f] = \int_{\mathcal{B}} F(\mathbf{x}, f, \partial^{\alpha_1} f, \partial^{\alpha_2} f, \ldots, \partial^{\alpha_n} f) \,d\mathbf{x} = \int_{\mathcal{B}} \Bigl[\sum_{i=1}^N \bigl(f(\mathbf{x}) - f_i\bigr)^2 \delta(\mathbf{x} - \mathbf{c}_i) + \lambda |\nabla^m f(\mathbf{x})|^2\Bigr] \,d\mathbf{x},

where \alpha_i is a multi-index that ranges over all partial derivatives of order m for \mathbb{R}^d. In order to apply the Euler–Lagrange equation for a single function of multiple variables and higher-order derivatives, the quantities

: \frac{\partial F}{\partial f} = 2\sum_{i=1}^N \bigl(f(\mathbf{x}) - f_i\bigr) \delta(\mathbf{x} - \mathbf{c}_i)

and

: \sum_{i=1}^n \partial^{\alpha_i} \frac{\partial F}{\partial(\partial^{\alpha_i} f)} = 2\lambda \Delta^m f

are needed. Inserting these quantities into the Euler–Lagrange equation shows that

: \sum_{i=1}^N \bigl(f(\mathbf{x}) - f_i\bigr) \delta(\mathbf{x} - \mathbf{c}_i) + (-1)^m \lambda \Delta^m f(\mathbf{x}) = 0. \qquad (4)

A weak solution f(\mathbf{x}) of (4) satisfies

: \sum_{i=1}^N \bigl(f(\mathbf{c}_i) - f_i\bigr) g(\mathbf{c}_i) + (-1)^m \lambda \int_{\mathcal{B}} (\Delta^m f)\, g \,d\mathbf{x} = 0 \qquad (5)

for all smooth test functions g that vanish outside of \mathcal{B}. A weak solution of equation (4) will still minimize (3) while getting rid of the delta function through integration.

Let f be a polyharmonic spline as defined by equation (1). The following calculations will show that f satisfies (5). Applying the \Delta^m operator to equation (1) yields

: \Delta^m f = \sum_{i=1}^N w_i C_{\varphi} \delta(\mathbf{x} - \mathbf{c}_i),

where C_{\varphi} = 8\pi for the 2D thin plate RBF, C_{\varphi} = -8\pi for the 3D biharmonic RBF, and C_{\varphi} = -96\pi for the 3D triharmonic RBF (see the previous section). So (5) is equivalent to

: \sum_{i=1}^N \bigl[f(\mathbf{c}_i) - f_i + (-1)^m \lambda C_{\varphi} w_i\bigr] g(\mathbf{c}_i) = 0. \qquad (6)

The only possible solution to (6) for all test functions g is

: f(\mathbf{c}_i) - f_i + (-1)^m \lambda C_{\varphi} w_i = 0 \quad \text{for } i = 1, 2, \ldots, N \qquad (7)

(which implies interpolation if \lambda = 0). Combining the definition of f in equation (1) with equation (7) results in almost the same linear system as equation (2), except that the matrix A is replaced with A + (-1)^m C_{\varphi}\lambda I, where I is the N\times N identity matrix. For example, for the 3D triharmonic RBFs, A is replaced with A + 96\pi\lambda I.
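
In terms of the assembly sketch given after the Definition section, only the upper-left block of the system changes. The following sketch (assuming NumPy; the defaults correspond to the 2D thin plate RBF, and the function name smoothing_system is illustrative) shows the modification:

 import numpy as np

 def smoothing_system(A, B, f, lam, m=2, C_phi=8 * np.pi):
     # Same block system as (2), but with A replaced by A + (-1)^m * C_phi * lam * I.
     # Defaults correspond to the 2D thin plate RBF (m = 2, C_phi = 8*pi);
     # lam = 0 recovers plain interpolation.
     N, q = A.shape[0], B.shape[1]
     M = np.block([[A + ((-1) ** m) * C_phi * lam * np.eye(N), B],
                   [B.T, np.zeros((q, q))]])
     rhs = np.concatenate([f, np.zeros(q)])
     sol = np.linalg.solve(M, rhs)
     return sol[:N], sol[N:]                           # w and v of the smoothing spline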


Explanation of additional constraints

In (2), the bottom half of the system of equations (B^{\mathrm{T}}\mathbf{w} = \mathbf{0}) is given without explanation. The explanation first requires deriving a simplified form of \int_{\mathcal{B}} |\nabla^m f|^2 \,d\mathbf{x} when \mathcal{B} is all of \mathbb{R}^d.

First, require that \sum_{i=1}^N w_i = 0. This ensures that all derivatives of order m and higher of f(\mathbf{x}) = \sum_{i=1}^N w_i \varphi(\|\mathbf{x} - \mathbf{c}_i\|) vanish at infinity. For example, let m = 3 and d = 3 and let \varphi be the triharmonic RBF. Then \varphi_{yzz} = 3y(x^2+y^2)/(x^2+y^2+z^2)^{3/2} (considering \varphi as a mapping from \mathbb{R}^3 to \mathbb{R}). For a given center \mathbf{P} = (P_1, P_2, P_3),

: \varphi_{yzz}(\mathbf{x} - \mathbf{P}) = \frac{3(y-P_2)\bigl((x-P_1)^2 + (y-P_2)^2\bigr)}{\bigl((x-P_1)^2 + (y-P_2)^2 + (z-P_3)^2\bigr)^{3/2}}.

On a line \mathbf{x} = \mathbf{a} + t\mathbf{b} for an arbitrary point \mathbf{a} and unit vector \mathbf{b},

: \varphi_{yzz}(\mathbf{x} - \mathbf{P}) = \frac{3(a_2 + tb_2 - P_2)\bigl((a_1 + tb_1 - P_1)^2 + (a_2 + tb_2 - P_2)^2\bigr)}{\bigl((a_1 + tb_1 - P_1)^2 + (a_2 + tb_2 - P_2)^2 + (a_3 + tb_3 - P_3)^2\bigr)^{3/2}}.

Dividing both numerator and denominator of this by t^3 shows that

: \lim_{t\to\infty} \varphi_{yzz}(\mathbf{x} - \mathbf{P}) = \frac{3b_2(b_1^2 + b_2^2)}{(b_1^2 + b_2^2 + b_3^2)^{3/2}},

a quantity independent of the center \mathbf{P}. So on the given line,

: \lim_{t\to\infty} f_{yzz}(\mathbf{x}) = \lim_{t\to\infty} \sum_{i=1}^N w_i \varphi_{yzz}(\mathbf{x} - \mathbf{c}_i) = \Bigl(\sum_{i=1}^N w_i\Bigr) \frac{3b_2(b_1^2 + b_2^2)}{(b_1^2 + b_2^2 + b_3^2)^{3/2}} = 0.

It is not quite enough to require that \sum_{i=1}^N w_i = 0, because in what follows it is necessary for f_\alpha g_\beta to vanish at infinity, where \alpha and \beta are multi-indices such that |\alpha| + |\beta| = 2m-1. For triharmonic \varphi, w_i u_j \varphi_\alpha(\mathbf{x} - \mathbf{c}_i)\varphi_\beta(\mathbf{x} - \mathbf{d}_j) (where u_j and \mathbf{d}_j are the weights and centers of g) is always a sum of total degree 5 polynomials in x, y, and z divided by the square root of a total degree 8 polynomial. Consider the behavior of these terms on the line \mathbf{x} = \mathbf{a} + t\mathbf{b} as t approaches infinity. The numerator is a degree 5 polynomial in t. Dividing numerator and denominator by t^4 leaves the degree 4 and 5 terms in the numerator and a function of \mathbf{b} only in the denominator. A degree 5 term divided by t^4 is a product of five b coordinates and t. The \sum w = 0 (and \sum u = 0) constraint makes this vanish everywhere on the line. A degree 4 term divided by t^4 is either a product of four b coordinates and an a coordinate, or a product of four b coordinates and a single c_i or d_j coordinate. The \sum w = 0 constraint makes the first type of term vanish everywhere on the line. The additional constraints \sum_{i=1}^N w_i \mathbf{c}_i = \mathbf{0} make the second type of term vanish.

Now define the inner product of two functions f, g : \mathbb{R}^d \to \mathbb{R}, each defined as a linear combination of polyharmonic RBFs \varphi with \sum w = 0 and \sum w\mathbf{c} = \mathbf{0}, as

: \langle f, g \rangle = \int_{\mathbb{R}^d} (\nabla^m f) \cdot (\nabla^m g) \,d\mathbf{x}.

Integration by parts shows that

: \langle f, g \rangle = (-1)^m \int_{\mathbb{R}^d} f\, \Delta^m g \,d\mathbf{x}. \qquad (8)

For example, let m = 2 and d = 2. Then

: \langle f, g \rangle = \int_{-\infty}^\infty \int_{-\infty}^\infty \bigl(f_{xx} g_{xx} + 2 f_{xy} g_{xy} + f_{yy} g_{yy}\bigr) \,dx \,dy. \qquad (9)

Integrating the first term of this by parts once yields

: \int_{-\infty}^\infty \int_{-\infty}^\infty f_{xx} g_{xx} \,dx \,dy = \int_{-\infty}^\infty f_x g_{xx}\Big|_{x=-\infty}^{\infty} \,dy - \int_{-\infty}^\infty\int_{-\infty}^\infty f_x g_{xxx} \,dx \,dy = -\int_{-\infty}^\infty \int_{-\infty}^\infty f_x g_{xxx} \,dx \,dy,

since f_x g_{xx} vanishes at infinity. Integrating by parts again results in \int_{-\infty}^\infty \int_{-\infty}^\infty f g_{xxxx} \,dx \,dy. So integrating by parts twice for each term of (9) yields

: \langle f, g \rangle = \int_{-\infty}^\infty \int_{-\infty}^\infty f\, (g_{xxxx} + 2g_{xxyy} + g_{yyyy}) \,dx \,dy = \int_{-\infty}^\infty \int_{-\infty}^\infty f\, (\Delta^2 g) \,dx \,dy.
Since (\Delta^m f)(\mathbf{x}) = \sum_{i=1}^N w_i C_{\varphi}\delta(\mathbf{x} - \mathbf{c}_i), equation (8) shows that

: \begin{align} \langle f, f \rangle &= (-1)^m \int_{\mathbb{R}^d} f(\mathbf{x}) \sum_{i=1}^N w_i C_{\varphi} \delta(\mathbf{x} - \mathbf{c}_i) \,d\mathbf{x} = (-1)^m C_{\varphi} \sum_{i=1}^N w_i f(\mathbf{c}_i) \\ &= (-1)^m C_{\varphi} \sum_{i=1}^N \sum_{j=1}^N w_i w_j \varphi(\|\mathbf{c}_i - \mathbf{c}_j\|) = (-1)^m C_{\varphi}\, \mathbf{w}^{\mathrm{T}} A \mathbf{w}. \end{align}

So if \sum w = 0 and \sum w\mathbf{c} = \mathbf{0},

: \int_{\mathbb{R}^d} |\nabla^m f|^2 \,d\mathbf{x} = \langle f, f \rangle = C\, \mathbf{w}^{\mathrm{T}} A \mathbf{w}, \quad \text{where } C = (-1)^m C_{\varphi}. \qquad (10)

Now the origin of the constraints B^{\mathrm{T}}\mathbf{w} = \mathbf{0} can be explained. Here B is a generalization of the B defined above to possibly include monomials up to degree m-1. In other words,

: B = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_N \\ \vdots & \vdots & & \vdots \\ \mathbf{c}_1^{m-1} & \mathbf{c}_2^{m-1} & \cdots & \mathbf{c}_N^{m-1} \end{bmatrix}^{\mathrm{T}}

where \mathbf{c}_i^j is a column vector of all degree j monomials of the coordinates of \mathbf{c}_i. The top half of (2) is equivalent to A\mathbf{w} + B\mathbf{v} - \mathbf{f} = \mathbf{0}. So to obtain a smoothing spline, one should minimize the scalar field F:\mathbb{R}^{N+d+1}\rightarrow \mathbb{R} defined by

: F(\mathbf{w}, \mathbf{v}) = |A\mathbf{w} + B\mathbf{v} - \mathbf{f}|^2 + \lambda C\, \mathbf{w}^{\mathrm{T}} A \mathbf{w}.

The equations

: \frac{\partial F}{\partial w_i} = 2 A_i (A\mathbf{w} + B\mathbf{v} - \mathbf{f}) + 2\lambda C A_i \mathbf{w} = 0 \quad \text{for } i = 1, 2, \ldots, N

and

: \frac{\partial F}{\partial v_i} = 2 B^{\mathrm{T}}_i (A\mathbf{w} + B\mathbf{v} - \mathbf{f}) = 0 \quad \text{for } i = 1, 2, \ldots, d+1

(where A_i denotes row i of A and B^{\mathrm{T}}_i denotes row i of B^{\mathrm{T}}) are equivalent to the two systems of linear equations A(A\mathbf{w} + B\mathbf{v} - \mathbf{f} + \lambda C\mathbf{w}) = \mathbf{0} and B^{\mathrm{T}}(A\mathbf{w} + B\mathbf{v} - \mathbf{f}) = \mathbf{0}. Since A is invertible, the first system is equivalent to A\mathbf{w} + B\mathbf{v} - \mathbf{f} + \lambda C\mathbf{w} = \mathbf{0}. So the first system implies the second system is equivalent to B^{\mathrm{T}}\mathbf{w} = \mathbf{0}. Just as in the previous smoothing spline coefficient derivation, the top half of (2) becomes (A + \lambda C I)\mathbf{w} + B\mathbf{v} = \mathbf{f}.

This derivation of the polyharmonic smoothing spline equation system did not assume the constraints necessary to guarantee that \int_{\mathbb{R}^d} |\nabla^m f|^2 \,d\mathbf{x} = C\, \mathbf{w}^{\mathrm{T}} A \mathbf{w}. But the constraints necessary to guarantee this, \sum w = 0 and \sum w\mathbf{c} = \mathbf{0}, are a subset of B^{\mathrm{T}}\mathbf{w} = \mathbf{0}, which is true for the critical point \mathbf{w} of F. So \int_{\mathbb{R}^d} |\nabla^m f|^2 \,d\mathbf{x} = C\, \mathbf{w}^{\mathrm{T}} A \mathbf{w} holds for the f formed from the solution of the polyharmonic smoothing spline equation system. Because the integral is positive for all \mathbf{w} \neq \mathbf{0}, the linear transformation resulting from the restriction of the domain of the linear transformation A to \mathbf{w} such that B^{\mathrm{T}}\mathbf{w} = \mathbf{0} must be positive definite. This fact enables transforming the polyharmonic smoothing spline equation system into a symmetric positive definite system of equations that can be solved twice as fast using the Cholesky decomposition.
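
The positive definiteness of A restricted to the null space of B^{\mathrm{T}} can be checked numerically for the 2D thin plate case; the following sketch (assuming NumPy; the random centers are arbitrary) projects A onto an orthonormal basis of that null space and prints the smallest eigenvalue, which should be positive:

 import numpy as np

 rng = np.random.default_rng(2)
 N = 40
 c = rng.standard_normal((N, 2))                       # random 2D centers
 r = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
 A = np.zeros_like(r)
 A[r > 0] = r[r > 0]**2 * np.log(r[r > 0])             # thin plate kernel r^2 ln r
 B = np.hstack([np.ones((N, 1)), c])                   # constraints: sum w = 0, sum w*c = 0

 # Orthonormal basis Z for the null space of B^T (the orthogonal complement of range(B)).
 Z = np.linalg.svd(B)[0][:, B.shape[1]:]
 eigs = np.linalg.eigvalsh(Z.T @ A @ Z)
 print(eigs.min())                                     # expected to be positive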


Examples

The next figure shows the interpolation through four points (marked by "circles") using different types of polyharmonic splines. The "curvature" of the interpolated curves grows with the order of the spline, and the extrapolation at the left boundary (''x'' < 0) is reasonable. The figure also includes the radial basis function ''φ'' = exp(−''r''²), which gives a good interpolation as well. Finally, the figure also includes the non-polyharmonic basis function ''φ'' = ''r''² to demonstrate that this radial basis function is not able to pass through the predefined points (the linear system of equations has no solution, and it is solved in a least-squares sense).

The next figure shows the same interpolation as in the first figure, with the only exception that the points to be interpolated are scaled by a factor of 100 (and the case ''φ'' = ''r''² is no longer included). Since ''φ'' = (scale·''r'')^''k'' = scale^''k''·''r''^''k'', the factor scale^''k'' can be extracted from the matrix A of the linear equation system, and therefore the solution is not influenced by the scaling. This is different for the logarithmic form of the spline, although the scaling has little influence. This analysis is reflected in the figure, where the interpolated curves show hardly any differences. Note that for other radial basis functions, such as ''φ'' = exp(−''kr''²) with ''k'' = 1, the interpolation is no longer reasonable and it would be necessary to adapt ''k''.

The next figure shows the same interpolation as in the first figure, with the only exception that the polynomial term of the function is not taken into account (and the case ''φ'' = ''r''² is no longer included). As can be seen from the figure, the extrapolation for ''x'' < 0 is no longer as "natural" as in the first figure for some of the basis functions. This indicates that the polynomial term is useful if extrapolation occurs.
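
A comparison of this kind can be reproduced with SciPy and Matplotlib; the following sketch (the four data points are arbitrary placeholders, not the values used for the original figures) interpolates four points in one dimension with several polyharmonic kernels and a Gaussian kernel:

 import numpy as np
 import matplotlib.pyplot as plt
 from scipy.interpolate import RBFInterpolator

 # Four arbitrary sample points (the data of the original figures are not reproduced here).
 xk = np.array([[0.0], [1.0], [1.5], [3.0]])
 fk = np.array([0.0, 1.0, -0.5, 0.5])
 xs = np.linspace(-1.0, 4.0, 400)[:, None]

 # 'linear', 'thin_plate_spline', 'cubic' and 'quintic' are the polyharmonic
 # basis functions with k = 1, 2, 3 and 5; 'gaussian' needs a shape parameter.
 for kernel in ('linear', 'thin_plate_spline', 'cubic', 'quintic', 'gaussian'):
     eps = 1.0 if kernel == 'gaussian' else None
     s = RBFInterpolator(xk, fk, kernel=kernel, epsilon=eps)
     plt.plot(xs[:, 0], s(xs), label=kernel)

 plt.plot(xk[:, 0], fk, 'o', color='k', label='data')
 plt.legend()
 plt.show()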


See also

* Inverse distance weighting
* Radial basis function
* Subdivision surface (emerging alternative to spline-based surfaces)
* Spline


External links

Computer code
* Polyharmonic Spline: ''An interactive example with Matlab/Octave source code''