# Radial basis function


The radial basis function method is a multi-variable scheme for function interpolation, i.e., the goal is to approximate a continuous function $f$ by a relatively simple interpolant $s$ which meets $f$ at a certain (usually finite) number of prescribed points (cf. also Approximation of functions; Interpolation). In the $n$-dimensional real space $\mathbb R^n$, given a continuous function $f:\mathbb{R}^n\to\mathbb R$ and so-called centres $x_j\in\mathbb R^n$, $j=1,2,\dots,m$, the interpolant to $f$ at the centres reads $$s(x)=\sum\limits_{j=1}^m\lambda_j\phi(\|x-x_j\|),\quad x\in\mathbb R^n,$$ where $\phi:\mathbb R_+\to\mathbb R$ is the radial basis function, the norm is the $n$-dimensional Euclidean norm, and the real coefficients $\lambda_j$ are fixed through the interpolation conditions $$s(x_j)=f(x_j),\quad j=1,\dots,m.$$
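The construction can be sketched numerically as follows. This is an illustrative setup of our own choosing (the function $f$, the number of centres $m$, and the use of the multi-quadric $\phi(r)=\sqrt{r^2+c^2}$ with parameter $c$ are not prescribed by the text): one assembles the matrix with entries $\phi(\|x_i-x_j\|)$ and solves for the coefficients $\lambda_j$.

```python
import numpy as np

# Illustrative sketch (our own choices of f, m, c): interpolation at
# scattered centres in R^2 with the multi-quadric phi(r) = sqrt(r^2 + c^2).
rng = np.random.default_rng(0)
m = 40
centres = rng.uniform(-1.0, 1.0, size=(m, 2))         # centres x_j
f = lambda x: np.sin(x[:, 0]) * np.cos(x[:, 1])       # function to interpolate
c = 0.5                                               # multi-quadric parameter
phi = lambda r: np.sqrt(r**2 + c**2)

# Interpolation matrix A[i, j] = phi(||x_i - x_j||)
A = phi(np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1))
lam = np.linalg.solve(A, f(centres))                  # coefficients lambda_j

def s(x):
    """Evaluate the interpolant at the points x (an array of shape (k, 2))."""
    r = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=-1)
    return phi(r) @ lam

# The interpolation conditions s(x_j) = f(x_j) hold up to round-off.
assert np.max(np.abs(s(centres) - f(centres))) < 1e-4
```

Note that the linear system is solved once; the interpolant can then be evaluated at arbitrary points of $\mathbb R^n$.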

Norms $\|\cdot\|$ other than Euclidean are possible in principle, but rarely used. In particular, the remarkable existence properties described below are usually no longer guaranteed if the norm is not Euclidean.

Examples of radial basis functions are the multi-quadric function $\phi(r)=\sqrt{r^2+c^2}$, $c$ a positive parameter [a7], which is known to be particularly useful in applications, the thin-plate spline $\phi(r)=r^2\log(r)$ [a6], the Gaussian function $\phi(r)=\exp(-c^2r^2)$, and the linear radial basis function $\phi(r)=r$.
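For concreteness, the four example functions can be written out as vectorized Python functions (a small sketch of ours; only the special value $\phi(0)=0$ for the thin-plate spline, its limit as $r\to 0$, is an implementation detail not stated above):

```python
import numpy as np

# The four example radial basis functions, as functions of r = ||x - x_j||.
def multiquadric(r, c=1.0):
    return np.sqrt(r**2 + c**2)

def thin_plate_spline(r):
    # r^2 log(r) has the limit value 0 at r = 0; guard log against r = 0
    r = np.asarray(r, dtype=float)
    return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)

def gaussian(r, c=1.0):
    return np.exp(-c**2 * r**2)

def linear(r):
    return r
```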

For the thin-plate spline and several other radial basis functions, a linear (generally, low-order) polynomial has to be added to $s$, with side conditions $\sum_{j=1}^m\lambda_j=0$ and $\sum_{j=1}^m\lambda_jx_j=0$, in order to be able to solve the interpolation equations uniquely. In that case, the centres must not lie on a straight line, but may otherwise be arbitrarily distributed ("scattered"). For multi-quadrics, Gaussian and linear radial functions, among others, the extra geometric condition is not needed: the interpolation linear system defined through the above interpolation conditions is uniquely solvable for all $m>1$ and $n$ if the centres are distinct [a9]. This is one of the most striking and useful features of radial basis function interpolation. In fact, for large classes of radial basis functions, which contain all the examples mentioned, the matrix which defines the coefficients through the interpolation conditions is conditionally positive definite (or conditionally negative definite) [a9], which means that it is positive (negative) definite on a subspace of $\mathbb R^m$ with small co-dimension.
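The augmented linear system that enforces these side conditions can be sketched as follows (the centres, data and dimensions below are our own illustrative choices): with $P$ the $m\times(n+1)$ matrix whose rows are $(1,x_j)$, one solves the block system $\begin{pmatrix}A&P\\P^T&0\end{pmatrix}\begin{pmatrix}\lambda\\c\end{pmatrix}=\begin{pmatrix}f\\0\end{pmatrix}$ for the RBF coefficients $\lambda$ and the linear polynomial coefficients $c$.

```python
import numpy as np

# Thin-plate spline interpolation in R^2 with an added linear polynomial
# (illustrative data of ours; centres are scattered, hence not collinear).
rng = np.random.default_rng(1)
m = 40
X = rng.uniform(0.0, 1.0, size=(m, 2))               # centres x_j
fvals = X[:, 0]**2 + X[:, 1]                          # data f(x_j)

def tps(r):
    # phi(r) = r^2 log(r), with the limit value 0 at r = 0
    return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)

A = tps(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
P = np.hstack([np.ones((m, 1)), X])                  # rows (1, x_j)

# Block system: [A P; P^T 0] [lam; c] = [f; 0] enforces the side conditions
M = np.block([[A, P], [P.T, np.zeros((3, 3))]])
sol = np.linalg.solve(M, np.concatenate([fvals, np.zeros(3)]))
lam, poly = sol[:m], sol[m:]

def s(x):
    r = np.linalg.norm(x[:, None, :] - X[None, :, :], axis=-1)
    return tps(r) @ lam + poly[0] + x @ poly[1:]

assert np.max(np.abs(s(X) - fvals)) < 1e-5           # interpolation conditions
assert np.max(np.abs(P.T @ lam)) < 1e-6              # side conditions
```

The zero block in the system matrix reflects that no conditions couple the polynomial coefficients with each other; unique solvability requires exactly the geometric condition above (centres not all on a line).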

See, for instance, [a10] or [a5] for reviews of this method. For the history of the method see [a7].

Besides the question of existence and uniqueness outlined above, the question of (uniform) convergence (cf. also Uniform convergence) of $s$ to $f$ when the centres become dense in a domain or in $\mathbb R^n$ is of central importance. J. Duchon [a6] has studied this issue for scattered centres $x_j$ in a Lipschitz domain $\Omega\subset\mathbb R^n$ for thin-plate splines and proved uniform convergence provided $\partial\Omega$ satisfies a cone condition, the $x_j$ become dense in $\Omega$ and $f$ is sufficiently smooth. His work was generalized to multi-quadrics, Gaussians and others (see, for instance, [a13], [a8]), while the question of uniform convergence and approximation order on infinite square grids of spacing $h>0$ was settled in [a2]. Estimates for the interpolation error when $h\to 0$ have been given (see [a2]) and provide error bounds of order $O(h^{n+1})$ in $n$ dimensions for the linear radial basis function $\phi(r)=r$, for example, if $f$ is sufficiently smooth.
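The $O(h^{n+1})$ order can be checked numerically in a simple one-dimensional experiment of our own (so $n=1$ and the predicted order is $O(h^2)$, i.e. halving $h$ should reduce the error by roughly a factor of four); note this uses a bounded interval rather than the infinite grid of [a2], so it is only a rough illustration:

```python
import numpy as np

# Rough 1-D check of the O(h^{n+1}) bound for phi(r) = r with n = 1:
# interpolate f = sin on an equally spaced grid of spacing h and measure
# the maximum error; halving h should cut the error by about 4.
f = np.sin

def interp_error(h):
    x = np.arange(0.0, 1.0 + h / 2, h)               # centres with spacing h
    A = np.abs(x[:, None] - x[None, :])              # phi(r) = r
    lam = np.linalg.solve(A, f(x))
    t = np.linspace(0.0, 1.0, 1001)                  # fine evaluation grid
    s = np.abs(t[:, None] - x[None, :]) @ lam
    return np.max(np.abs(s - f(t)))

e1, e2 = interp_error(0.1), interp_error(0.05)
ratio = e1 / e2                                       # approximately 4 for O(h^2)
assert 3.0 < ratio < 5.0
```

In one dimension the linear radial basis function reproduces the piecewise linear interpolant on the interval spanned by the centres, which explains the second-order behaviour.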

The remarkable convergence orders which occur, together with the above existence theorems, make the radial basis function method attractive if $n$ is large, especially when the centres are scattered, because in that case other schemes, such as polynomial interpolation (cf. e.g. Algebraic polynomial of best approximation), are often ruled out.

Since most of the radial basis functions are globally supported (however, see [a12] or [a4] for compactly supported ones), special attention is needed in the computation of the approximants, in particular if $m$ is large. Major contributions to this aspect can be found in [a11] and [a1], which include working software that allows efficient computation of the desired coefficients $\lambda_j$ for $m=50000$ and larger. Thin-plate splines and multi-quadrics for $n=2,3,4$ have also received special attention in implementations.

Given the accuracy and availability of the methods for arbitrary $n$ and $m$, other approximation schemes (not interpolation) such as wavelet schemes [a3], quasi-interpolation or least-squares approaches have been studied and used successfully, but the real advantage of the scheme remains its suitability for multi-variable interpolation of scattered data. The applications range from modelling the Earth's surface [a7] to optimization problems and the numerical solution of partial differential equations in high dimensions.