Radial basis function

A radial basis function (RBF) is a real-valued function $\varphi$ whose value depends only on the distance from the origin, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\lVert \mathbf{x} \rVert)$; or alternatively on the distance from some other point $\mathbf{c}$, called a center, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\lVert \mathbf{x} - \mathbf{c} \rVert)$. Any function $\varphi$ that satisfies the property $\varphi(\mathbf{x}) = \hat{\varphi}(\lVert \mathbf{x} \rVert)$ is a radial function. The norm is usually Euclidean distance, although other distance functions are also possible.

Sums of radial basis functions are typically used to approximate given functions. This approximation process can also be interpreted as a simple kind of neural network; this was the context in which they originally surfaced, in work by David Broomhead and David Lowe in 1988,[1][2] which stemmed from Michael J. D. Powell's seminal research from 1977.[3][4][5] RBFs are also used as a kernel in support vector classification.[6]
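As a brief illustration of the kernel use, here is a minimal sketch of RBF-kernel support vector classification with scikit-learn (assumed available); the synthetic dataset and the gamma value are illustrative choices, not part of the original text.

```python
# Hedged sketch: RBF-kernel SVM on a toy dataset that is not linearly separable.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.1, random_state=0)
clf = SVC(kernel="rbf", gamma=2.0)  # kernel k(x, x') = exp(-gamma * ||x - x'||^2)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy; the RBF kernel separates the rings
```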

Types

Commonly used types of radial basis functions include (writing $r = \lVert \mathbf{x} - \mathbf{x}_i \rVert$ and using $\varepsilon$ as a shape parameter):

  - Gaussian: $\varphi(r) = e^{-(\varepsilon r)^2}$
  - Multiquadric: $\varphi(r) = \sqrt{1 + (\varepsilon r)^2}$
  - Inverse quadratic: $\varphi(r) = \frac{1}{1 + (\varepsilon r)^2}$
  - Inverse multiquadric: $\varphi(r) = \frac{1}{\sqrt{1 + (\varepsilon r)^2}}$
  - Polyharmonic spline: $\varphi(r) = r^k$ for $k = 1, 3, 5, \dots$, and $\varphi(r) = r^k \ln(r)$ for $k = 2, 4, 6, \dots$
  - Thin plate spline (a special polyharmonic spline): $\varphi(r) = r^2 \ln(r)$
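As a sketch, the first four types above can be written directly in NumPy; the default shape parameter and the thin-plate-spline handling of $r = 0$ (set to 0 by continuity) are illustrative choices.

```python
import numpy as np

def gaussian(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)

def multiquadric(r, eps=1.0):
    return np.sqrt(1 + (eps * r) ** 2)

def inverse_quadratic(r, eps=1.0):
    return 1.0 / (1 + (eps * r) ** 2)

def inverse_multiquadric(r, eps=1.0):
    return 1.0 / np.sqrt(1 + (eps * r) ** 2)

def thin_plate_spline(r):
    # r^2 * ln(r), extended to 0 at r = 0 by continuity
    return np.where(r > 0, r ** 2 * np.log(np.maximum(r, 1e-300)), 0.0)
```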

Approximation

Main article: Kernel smoothing

Radial basis functions are typically used to build up function approximations of the form

$$y(\mathbf{x}) = \sum_{i=1}^{N} w_i \, \varphi(\lVert \mathbf{x} - \mathbf{x}_i \rVert),$$

where the approximating function $y(\mathbf{x})$ is represented as a sum of $N$ radial basis functions, each associated with a different center $\mathbf{x}_i$ and weighted by an appropriate coefficient $w_i$. The weights $w_i$ can be estimated using the matrix methods of linear least squares, because the approximating function is linear in the weights.
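A minimal sketch of this least-squares fit, assuming a Gaussian basis; the sample data, centers, and shape parameter are illustrative choices.

```python
import numpy as np

def fit_rbf_weights(x_samples, y_samples, centers, eps=1.0):
    # Design matrix: Phi[j, i] = phi(|x_j - x_i|) for a Gaussian basis
    r = np.abs(x_samples[:, None] - centers[None, :])
    Phi = np.exp(-(eps * r) ** 2)
    # The model is linear in w, so ordinary least squares applies:
    # minimize ||Phi w - y||_2
    w, *_ = np.linalg.lstsq(Phi, y_samples, rcond=None)
    return w

x = np.linspace(0, 4, 50)
y = np.sin(x)
centers = np.linspace(0, 4, 10)
w = fit_rbf_weights(x, y, centers, eps=1.5)
y_hat = np.exp(-(1.5 * np.abs(x[:, None] - centers[None, :])) ** 2) @ w
print(np.max(np.abs(y_hat - y)))  # maximum error on the fitting set
```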

Approximation schemes of this kind have been used particularly in time series prediction and in the control of nonlinear systems exhibiting sufficiently simple chaotic behaviour, as well as in 3D reconstruction in computer graphics (for example, hierarchical RBF and pose-space deformation).

RBF Network

[Figure: Two unnormalized Gaussian radial basis functions in one input dimension, with centers at $x_1 = 0.75$ and $x_2 = 3.25$.]

The sum

$$y(\mathbf{x}) = \sum_{i=1}^{N} w_i \, \varphi(\lVert \mathbf{x} - \mathbf{x}_i \rVert)$$

can also be interpreted as a rather simple single-layer type of artificial neural network called a radial basis function network, with the radial basis functions taking on the role of the activation functions of the network. It can be shown that any continuous function on a compact interval can in principle be interpolated with arbitrary accuracy by a sum of this form, if a sufficiently large number N of radial basis functions is used.
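For interpolation specifically, placing one basis function at every data point makes the design matrix square, and for a Gaussian basis with distinct nodes it is nonsingular, so the weights solve the linear system exactly. A minimal sketch, with illustrative nodes and shape parameter:

```python
import numpy as np

nodes = np.linspace(0, 4, 9)    # interpolation nodes double as centers
values = np.sin(nodes)          # function values to interpolate
eps = 1.0

Phi = np.exp(-(eps * np.abs(nodes[:, None] - nodes[None, :])) ** 2)
w = np.linalg.solve(Phi, values)

# The interpolant reproduces the data at the nodes (up to round-off).
print(np.allclose(Phi @ w, values))
```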

The approximant $y(\mathbf{x})$ is differentiable with respect to the weights $w_i$. The weights could thus be learned using any of the standard iterative methods for neural networks.

Using radial basis functions in this manner yields a reasonable interpolation approach provided that the fitting set has been chosen such that it covers the entire range systematically (equidistant data points are ideal). However, without a polynomial term that is orthogonal to the radial basis functions, estimates outside the fitting set tend to perform poorly.
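A common remedy is to append a low-order polynomial to the RBF expansion and impose matching moment constraints on the weights ($\sum_i w_i = 0$ and $\sum_i w_i x_i = 0$ for a linear tail), solved as a bordered linear system. The sketch below uses a cubic polyharmonic spline basis; the details are illustrative, not prescribed by the original text.

```python
import numpy as np

nodes = np.linspace(0, 4, 9)
values = np.sin(nodes)
Phi = np.abs(nodes[:, None] - nodes[None, :]) ** 3   # phi(r) = r^3
P = np.column_stack([np.ones_like(nodes), nodes])    # polynomial block [1, x]

# Bordered (KKT-style) system: solve for RBF weights w and polynomial
# coefficients c subject to the moment constraints P.T @ w = 0.
n, m = len(nodes), P.shape[1]
A = np.block([[Phi, P], [P.T, np.zeros((m, m))]])
rhs = np.concatenate([values, np.zeros(m)])
sol = np.linalg.solve(A, rhs)
w, coef = sol[:n], sol[n:]

x_new = 5.0  # a point outside the fitting range
y_new = np.abs(x_new - nodes) ** 3 @ w + coef @ np.array([1.0, x_new])
print(y_new)  # the linear tail governs the extrapolation trend
```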

References

  1. Radial Basis Function networks
  2. Broomhead, David H.; Lowe, David (1988). "Multivariable Functional Interpolation and Adaptive Networks" (PDF). Complex Systems. 2: 321–355. Archived from the original (PDF) on 2014-07-14.
  3. Michael J. D. Powell (1977). "Restart procedures for the conjugate gradient method" (PDF). Mathematical Programming. Springer. 12 (1): 241–254. doi:10.1007/bf01593790.
  4. Sahin, Ferat (1997). A Radial Basis Function Approach to a Color Image Classification Problem in a Real Time Industrial Application (PDF) (M.Sc.). Virginia Tech. p. 26. Radial basis functions were first introduced by Powell to solve the real multivariate interpolation problem.
  5. Broomhead & Lowe 1988, p. 347: "We would like to thank Professor M.J.D. Powell at the Department of Applied Mathematics and Theoretical Physics at Cambridge University for providing the initial stimulus for this work."
  6. VanderPlas, Jake (6 May 2015). "Introduction to Support Vector Machines". O'Reilly. Retrieved 14 May 2015.
