
In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.


Network architecture

Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. The input can be modeled as a vector of real numbers \mathbf{x} \in \mathbb{R}^n. The output of the network is then a scalar function of the input vector, \varphi : \mathbb{R}^n \to \mathbb{R}, and is given by

: \varphi(\mathbf{x}) = \sum_{i=1}^N a_i \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

where N is the number of neurons in the hidden layer, \mathbf{c}_i is the center vector for neuron i, and a_i is the weight of neuron i in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance (although the Mahalanobis distance appears to perform better with pattern recognition) and the radial basis function is commonly taken to be Gaussian

: \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) = \exp\left[-\beta_i \left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert^2\right].

The Gaussian basis functions are local to the center vector in the sense that

: \lim_{\left\Vert \mathbf{x} \right\Vert \to \infty} \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) = 0,

i.e. changing parameters of one neuron has only a small effect for input values that are far away from the center of that neuron. Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of \mathbb{R}^n. This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision. The parameters a_i, \mathbf{c}_i, and \beta_i are determined in a manner that optimizes the fit between \varphi and the data.
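As a concrete sketch of the forward pass above (illustrative NumPy code with assumed array names, not code from the original literature), the arrays `centers`, `betas`, and `weights` play the roles of \mathbf{c}_i, \beta_i, and a_i:

```python
import numpy as np

def rbf_forward(x, centers, betas, weights):
    """Evaluate an unnormalized Gaussian RBF network at input x.

    x       : input vector, shape (n,)
    centers : center vectors c_i, shape (N, n)
    betas   : width parameters beta_i, shape (N,)
    weights : linear output weights a_i, shape (N,)
    """
    # Squared Euclidean distances ||x - c_i||^2 for every hidden neuron.
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    # Gaussian activations rho(||x - c_i||) = exp(-beta_i ||x - c_i||^2).
    rho = np.exp(-betas * sq_dist)
    # The output is a linear combination of the basis functions.
    return np.dot(weights, rho)

# Example: a network with N = 3 hidden neurons on 2-dimensional inputs.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 0.5]])
betas = np.array([1.0, 2.0, 0.5])
weights = np.array([0.3, -0.7, 1.2])
print(rbf_forward(np.array([0.2, 0.1]), centers, betas, weights))
```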


Normalization


Normalized architecture

In addition to the above ''unnormalized'' architecture, RBF networks can be ''normalized''. In this case the mapping is

: \varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ \frac{\sum_{i=1}^N a_i \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)}{\sum_{i=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)} = \sum_{i=1}^N a_i u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

where

: u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) \ \stackrel{\mathrm{def}}{=}\ \frac{\rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)}{\sum_{j=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_j \right\Vert\big)}

is known as a ''normalized radial basis function''.
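A minimal sketch of the normalized variant, assuming the same Gaussian \rho as before; only the division by the sum of activations differs from the unnormalized forward pass:

```python
import numpy as np

def normalized_rbf_forward(x, centers, betas, weights):
    """Evaluate a normalized Gaussian RBF network at input x."""
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    rho = np.exp(-betas * sq_dist)
    # Normalized basis functions u_i = rho_i / sum_j rho_j;
    # the denominator is strictly positive for Gaussian rho.
    u = rho / np.sum(rho)
    return np.dot(weights, u)
```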


Theoretical motivation for normalization

There is theoretical justification for this architecture in the case of stochastic data flow. Assume a stochastic kernel approximation for the joint probability density

: P(\mathbf{x} \land y) = \frac{1}{N} \sum_{i=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) \, \sigma\big(\left\vert y - e_i \right\vert\big)

where the weights \mathbf{c}_i and e_i are exemplars from the data and we require the kernels to be normalized

: \int \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) \, d^n\mathbf{x} = 1

and

: \int \sigma\big(\left\vert y - e_i \right\vert\big) \, dy = 1.

The probability densities in the input and output spaces are

: P(\mathbf{x}) = \int P(\mathbf{x} \land y) \, dy = \frac{1}{N} \sum_{i=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

and

: P(y) = \int P(\mathbf{x} \land y) \, d^n\mathbf{x} = \frac{1}{N} \sum_{i=1}^N \sigma\big(\left\vert y - e_i \right\vert\big).

The expectation of y given an input \mathbf{x} is

: \varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ E(y \mid \mathbf{x}) = \int y \, P(y \mid \mathbf{x}) \, dy

where P(y \mid \mathbf{x}) is the conditional probability of y given \mathbf{x}. The conditional probability is related to the joint probability through Bayes' theorem

: P(y \mid \mathbf{x}) = \frac{P(\mathbf{x} \land y)}{P(\mathbf{x})}

which yields

: \varphi(\mathbf{x}) = \int y \, \frac{P(\mathbf{x} \land y)}{P(\mathbf{x})} \, dy.

This becomes

: \varphi(\mathbf{x}) = \frac{\sum_{i=1}^N e_i \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)}{\sum_{i=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)} = \sum_{i=1}^N e_i u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

when the integrations are performed.


Local linear models

It is sometimes convenient to expand the architecture to include local linear models. In that case the architectures become, to first order,

: \varphi(\mathbf{x}) = \sum_{i=1}^N \big(a_i + \mathbf{b}_i \cdot (\mathbf{x} - \mathbf{c}_i)\big) \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

and

: \varphi(\mathbf{x}) = \sum_{i=1}^N \big(a_i + \mathbf{b}_i \cdot (\mathbf{x} - \mathbf{c}_i)\big) u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

in the unnormalized and normalized cases, respectively. Here \mathbf{b}_i are weights to be determined. Higher-order linear terms are also possible. This result can be written

: \varphi(\mathbf{x}) = \sum_{i=1}^{2N} \sum_{j=1}^n e_{ij} v_{ij}\big(\mathbf{x} - \mathbf{c}_i\big)

where

: e_{ij} = \begin{cases} a_i, & \mbox{if } i \in [1,N] \\ b_{ij}, & \mbox{if } i \in [N+1,2N] \end{cases}

and

: v_{ij}\big(\mathbf{x} - \mathbf{c}_i\big) \ \stackrel{\mathrm{def}}{=}\ \begin{cases} \delta_{ij} \, \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big), & \mbox{if } i \in [1,N] \\ \left(x_{ij} - c_{ij}\right) \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big), & \mbox{if } i \in [N+1,2N] \end{cases}

in the unnormalized case; in the normalized case \rho is replaced by u. Here \delta_{ij} is a Kronecker delta function defined as

: \delta_{ij} = \begin{cases} 1, & \mbox{if } i = j \\ 0, & \mbox{if } i \ne j \end{cases}.
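The first-order expansion can be evaluated directly; the sketch below (illustrative, with assumed array names) implements the unnormalized case, where `b` holds the slope vectors \mathbf{b}_i:

```python
import numpy as np

def local_linear_rbf(x, centers, betas, a, b):
    """Unnormalized RBF network with first-order local linear models.

    a : constant terms a_i, shape (N,)
    b : local slope vectors b_i, shape (N, n)
    """
    diff = x - centers                               # (x - c_i), shape (N, n)
    rho = np.exp(-betas * np.sum(diff ** 2, axis=1))
    # Each neuron contributes (a_i + b_i . (x - c_i)) * rho_i.
    return np.sum((a + np.sum(b * diff, axis=1)) * rho)
```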


Training

RBF networks are typically trained from pairs of input and target values \mathbf{x}(t), y(t), t = 1, \dots, T, by a two-step algorithm. In the first step, the center vectors \mathbf{c}_i of the RBF functions in the hidden layer are chosen. This step can be performed in several ways; centers can be randomly sampled from some set of examples, or they can be determined using k-means clustering. Note that this step is unsupervised.

The second step simply fits a linear model with coefficients w_i to the hidden layer's outputs with respect to some objective function. A common objective function, at least for regression/function estimation, is the least squares function

: K(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T K_t(\mathbf{w})

where

: K_t(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big]^2.

We have explicitly included the dependence on the weights. Minimization of the least squares objective function by optimal choice of weights optimizes accuracy of fit.

There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as

: H(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ K(\mathbf{w}) + \lambda S(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T H_t(\mathbf{w})

where

: S(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ \sum_{t=1}^T S_t(\mathbf{w})

and

: H_t(\mathbf{w}) \ \stackrel{\mathrm{def}}{=}\ K_t(\mathbf{w}) + \lambda S_t(\mathbf{w})

where optimization of S maximizes smoothness and \lambda is known as a regularization parameter. A third optional backpropagation step can be performed to fine-tune all of the RBF net's parameters.
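The two-step procedure can be sketched as follows, assuming randomly sampled centers and a plain least-squares fit of the linear weights (a common but by no means unique choice):

```python
import numpy as np

def train_rbf(X, y, N, beta, rng=np.random.default_rng(0)):
    """Two-step RBF training: choose centers, then fit linear weights.

    X : training inputs, shape (T, n);  y : targets, shape (T,)
    """
    # Step 1 (unsupervised): sample N centers from the training inputs.
    centers = X[rng.choice(len(X), size=N, replace=False)]
    # Hidden-layer design matrix: G[t, i] = rho(||x(t) - c_i||).
    sq_dist = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    G = np.exp(-beta * sq_dist)
    # Step 2 (supervised): least-squares fit of the linear weights.
    weights, *_ = np.linalg.lstsq(G, y, rcond=None)
    return centers, weights
```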


Interpolation

RBF networks can be used to interpolate a function y: \mathbb{R}^n \to \mathbb{R} when the values of that function are known on a finite number of points: y(\mathbf{x}_i) = b_i, i = 1, \ldots, N. Taking the known points \mathbf{x}_i to be the centers of the radial basis functions and evaluating the values of the basis functions at the same points, g_{ij} = \rho\big(\left\Vert \mathbf{x}_j - \mathbf{x}_i \right\Vert\big), the weights can be solved from the equation

: \begin{bmatrix} g_{11} & g_{12} & \cdots & g_{1N} \\ g_{21} & g_{22} & \cdots & g_{2N} \\ \vdots & & \ddots & \vdots \\ g_{N1} & g_{N2} & \cdots & g_{NN} \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_N \end{bmatrix}.

It can be shown that the interpolation matrix in the above equation is non-singular if the points \mathbf{x}_i are distinct, and thus the weights w can be solved by simple linear algebra:

: \mathbf{w} = \mathbf{G}^{-1} \mathbf{b}

where G = (g_{ij}).
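In code, exact interpolation reduces to a single linear solve; a sketch assuming Gaussian basis functions:

```python
import numpy as np

def rbf_interpolate(X, b, beta):
    """Solve G w = b for exact interpolation through the points X.

    X : known points (used as centers), shape (N, n);  b : values, shape (N,)
    """
    # g_ij = rho(||x_j - x_i||); symmetric for this choice of centers.
    sq_dist = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    G = np.exp(-beta * sq_dist)
    # Non-singular for distinct points, so a direct solve suffices.
    return np.linalg.solve(G, b)
```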


Function approximation

If the purpose is not to perform strict interpolation but instead more general function approximation or classification, the optimization is somewhat more complex because there is no obvious choice for the centers. The training is typically done in two phases: first fixing the widths and centers, and then the weights. This can be justified by considering the different nature of the non-linear hidden neurons versus the linear output neuron.


Training the basis function centers

Basis function centers can be randomly sampled among the input instances, obtained by the orthogonal least squares learning algorithm, or found by clustering the samples and choosing the cluster means as the centers. The RBF widths are usually all fixed to the same value, which is proportional to the maximum distance between the chosen centers.
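One widely quoted heuristic (found, e.g., in Haykin's textbook) ties the shared width to the maximum inter-center distance; the sketch below implements one such choice and is an illustration, not a prescription from this article:

```python
import numpy as np

def shared_beta(centers):
    """Common Gaussian width from the maximum inter-center distance.

    Uses sigma = d_max / sqrt(2 N), hence beta = 1 / (2 sigma^2) -- one
    common heuristic; any width proportional to d_max is in the same spirit.
    """
    diffs = centers[:, None, :] - centers[None, :, :]
    d_max = np.sqrt(np.max(np.sum(diffs ** 2, axis=2)))
    sigma = d_max / np.sqrt(2 * len(centers))
    return 1.0 / (2.0 * sigma ** 2)
```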


Pseudoinverse solution for the linear weights

After the centers c_i have been fixed, the weights that minimize the error at the output can be computed with a linear pseudoinverse solution:

: \mathbf{w} = \mathbf{G}^+ \mathbf{b},

where the entries of G are the values of the radial basis functions evaluated at the points \mathbf{x}_j: g_{ji} = \rho\big(\left\Vert \mathbf{x}_j - \mathbf{c}_i \right\Vert\big). The existence of this linear solution means that, unlike multi-layer perceptron (MLP) networks, RBF networks have an explicit minimizer (when the centers are fixed).
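With the centers fixed, the weight computation is one line of linear algebra; a sketch:

```python
import numpy as np

def fit_weights_pinv(G, b):
    """Least-squares weights via the Moore-Penrose pseudoinverse.

    G : design matrix with g_ji = rho(||x_j - c_i||), shape (T, N)
    b : target values, shape (T,)
    """
    return np.linalg.pinv(G) @ b
```

In practice `np.linalg.lstsq` is usually preferable numerically, but the pseudoinverse form mirrors the equation above.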


Gradient descent training of the linear weights

Another possible training algorithm is gradient descent. In gradient descent training, the weights are adjusted at each time step by moving them in a direction opposite from the gradient of the objective function (thus allowing the minimum of the objective function to be found):

: \mathbf{w}(t+1) = \mathbf{w}(t) - \nu \frac{d}{d\mathbf{w}} H_t(\mathbf{w})

where \nu is a "learning parameter." For the case of training the linear weights, a_i, the algorithm becomes

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \rho\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)

in the unnormalized case and

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] u\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)

in the normalized case. For local-linear architectures, gradient-descent training is

: e_{ij}(t+1) = e_{ij}(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] v_{ij}\big(\mathbf{x}(t) - \mathbf{c}_i\big).
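A sketch of the stochastic update for the linear weights in the unnormalized case (one pass shown; the normalized case only swaps \rho for u):

```python
import numpy as np

def lms_train(X, y, centers, beta, nu, a):
    """One pass of gradient-descent (LMS) training of the linear weights."""
    for x_t, y_t in zip(X, y):
        rho = np.exp(-beta * np.sum((centers - x_t) ** 2, axis=1))
        error = y_t - np.dot(a, rho)      # y(t) - phi(x(t), w)
        a = a + nu * error * rho          # step opposite the gradient
    return a
```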


Projection operator training of the linear weights

For the case of training the linear weights, a_i and e_{ij}, the algorithm becomes

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{\rho\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}{\sum_{j=1}^N \rho^2\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_j \right\Vert\big)}

in the unnormalized case,

: a_i(t+1) = a_i(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{u\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}{\sum_{j=1}^N u^2\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_j \right\Vert\big)}

in the normalized case, and

: e_{ij}(t+1) = e_{ij}(t) + \nu \big[ y(t) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{v_{ij}\big(\mathbf{x}(t) - \mathbf{c}_i\big)}{\sum_{i=1}^{2N} \sum_{j=1}^n v_{ij}^2\big(\mathbf{x}(t) - \mathbf{c}_i\big)}

in the local-linear case. For one basis function, projection operator training reduces to Newton's method.
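The projection-operator update differs from plain gradient descent only in the normalizing denominator; a sketch for the unnormalized architecture:

```python
import numpy as np

def projection_step(a, x_t, y_t, centers, beta, nu):
    """One projection-operator (normalized-LMS-style) update of weights a."""
    rho = np.exp(-beta * np.sum((centers - x_t) ** 2, axis=1))
    error = y_t - np.dot(a, rho)
    # Dividing by sum_j rho_j^2 projects the error onto the basis outputs.
    return a + nu * error * rho / np.sum(rho ** 2)
```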


Examples


Logistic map

The basic properties of radial basis functions can be illustrated with a simple mathematical map, the logistic map, which maps the unit interval onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore function approximation, time series prediction, and control theory. The map originated from the field of population dynamics and became the prototype for chaotic time series. The map, in the fully chaotic regime, is given by

: x(t+1) \ \stackrel{\mathrm{def}}{=}\ f\big[x(t)\big] = 4 x(t) \big[1 - x(t)\big]

where t is a time index. The value of x at time t+1 is a parabolic function of x at time t. This equation represents the underlying geometry of the chaotic time series generated by the logistic map.

Generation of the time series from this equation is the forward problem. The examples here illustrate the inverse problem: identification of the underlying dynamics, or fundamental equation, of the logistic map from exemplars of the time series. The goal is to find an estimate

: x(t+1) = f\big[x(t)\big] \approx \varphi(t) = \varphi\big[x(t)\big]

for f.
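Generating the prototype data stream is straightforward; for instance:

```python
import numpy as np

def logistic_series(x0=0.3, T=100):
    """Generate T steps of the fully chaotic logistic map x -> 4x(1-x)."""
    x = np.empty(T + 1)
    x[0] = x0
    for t in range(T):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    return x
```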


Function approximation


Unnormalized radial basis functions

The architecture is

: \varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ \sum_{i=1}^N a_i \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

where

: \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) = \exp\left[-\beta_i \left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert^2\right] = \exp\left[-\beta_i \left(x(t) - c_i\right)^2\right].

Since the input is a scalar rather than a vector, the input dimension is one. We choose the number of basis functions as N = 5 and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight \beta is taken to be a constant equal to 5. The weights c_i are five exemplars from the time series. The weights a_i are trained with projection operator training:

: a_i(t+1) = a_i(t) + \nu \big[ x(t+1) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{\rho\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}{\sum_{j=1}^N \rho^2\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_j \right\Vert\big)}

where the learning rate \nu is taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error is 0.15.
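Putting the pieces together, a sketch of this experiment follows. The seed, the initial state, the choice of exemplars as centers, and zero-initialized weights are all assumptions of the sketch, so the printed rms will not exactly reproduce the 0.15 quoted above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Generate 101 points of the chaotic logistic map time series.
x = np.empty(101)
x[0] = rng.uniform(0, 1)
for t in range(100):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

N, beta, nu = 5, 5.0, 0.3
centers = rng.choice(x[:-1], size=N, replace=False)  # five exemplars
a = np.zeros(N)

# One pass of projection-operator training over the 100 training pairs.
for t in range(100):
    rho = np.exp(-beta * (x[t] - centers) ** 2)
    error = x[t + 1] - np.dot(a, rho)
    a += nu * error * rho / np.sum(rho ** 2)

phi = lambda s: np.dot(a, np.exp(-beta * (s - centers) ** 2))
rms = np.sqrt(np.mean((x[1:] - np.array([phi(s) for s in x[:-1]])) ** 2))
print(f"rms error: {rms:.3f}")
```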


Normalized radial basis functions

The normalized RBF architecture is

: \varphi(\mathbf{x}) \ \stackrel{\mathrm{def}}{=}\ \frac{\sum_{i=1}^N a_i \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)}{\sum_{i=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)} = \sum_{i=1}^N a_i u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)

where

: u\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) \ \stackrel{\mathrm{def}}{=}\ \frac{\rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big)}{\sum_{j=1}^N \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_j \right\Vert\big)}.

Again,

: \rho\big(\left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert\big) = \exp\left[-\beta \left\Vert \mathbf{x} - \mathbf{c}_i \right\Vert^2\right] = \exp\left[-\beta \left(x(t) - c_i\right)^2\right].

Again, we choose the number of basis functions as five and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight \beta is taken to be a constant equal to 6. The weights c_i are five exemplars from the time series. The weights a_i are trained with projection operator training:

: a_i(t+1) = a_i(t) + \nu \big[ x(t+1) - \varphi\big(\mathbf{x}(t), \mathbf{w}\big) \big] \frac{u\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}{\sum_{j=1}^N u^2\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_j \right\Vert\big)}

where the learning rate \nu is again taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error on a test set of 100 exemplars is 0.084, smaller than the unnormalized error. Normalization yields accuracy improvement. Typically accuracy with normalized basis functions increases even more over unnormalized functions as input dimensionality increases.


Time series prediction

Once the underlying geometry of the time series is estimated as in the previous examples, a prediction for the time series can be made by iteration:

: \varphi(0) = x(1)

: x(t) \approx \varphi(t-1)

: x(t+1) \approx \varphi(t) = \varphi[\varphi(t-1)].

A comparison of the actual and estimated time series is displayed in the figure. The estimated time series starts out at time zero with an exact knowledge of x(0). It then uses the estimate of the dynamics to update the time series estimate for several time steps. Note that the estimate is accurate for only a few time steps. This is a general characteristic of chaotic time series, a consequence of the sensitive dependence on initial conditions: a small initial error is amplified with time. A measure of the divergence of time series with nearly identical initial conditions is known as the Lyapunov exponent.
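Iterated prediction simply feeds the model's own output back in as the next input; a sketch, where `phi` stands for a trained one-step model such as the one in the earlier example:

```python
def iterate_prediction(phi, x0, steps):
    """Iterate a fitted map: each prediction becomes the next input.

    phi : trained one-step model, e.g. a fitted RBF network
    """
    preds = [x0]
    for _ in range(steps):
        preds.append(phi(preds[-1]))
    return preds

# Divergence from the true series after a few steps is expected:
# chaotic dynamics amplify any approximation error exponentially.
```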


Control of a chaotic time series

We assume the output of the logistic map can be manipulated through a control parameter c[x(t),t] such that

: x^c(t+1) = 4 x(t) \big[1 - x(t)\big] + c[x(t),t].

The goal is to choose the control parameter in such a way as to drive the time series to a desired output d(t). This can be done if we choose the control parameter to be

: c^*[x(t),t] \ \stackrel{\mathrm{def}}{=}\ -\varphi[x(t)] + d(t+1)

where

: y[x(t)] \approx f[x(t)] = x(t+1) - c[x(t),t]

is an approximation to the underlying natural dynamics of the system. The learning algorithm is given by

: a_i(t+1) = a_i(t) + \nu \, \varepsilon \, \frac{\rho\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_i \right\Vert\big)}{\sum_{j=1}^N \rho^2\big(\left\Vert \mathbf{x}(t) - \mathbf{c}_j \right\Vert\big)}

where

: \varepsilon \ \stackrel{\mathrm{def}}{=}\ f[x(t)] - \varphi[x(t)] = x(t+1) - c[x(t),t] - \varphi[x(t)] = x(t+1) - d(t+1).
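A sketch of the resulting control loop under the assumptions above (the model \varphi would be learned online while the control input steers the state toward d(t); `phi` here is an assumed fitted model of the natural dynamics):

```python
def control_step(x_t, d_next, phi):
    """Choose the control input c*[x(t), t] = -phi[x(t)] + d(t+1)."""
    return -phi(x_t) + d_next

def plant(x_t, c_t):
    """Controlled logistic map: x(t+1) = 4 x(t)[1 - x(t)] + c[x(t), t]."""
    return 4.0 * x_t * (1.0 - x_t) + c_t
```

If \varphi matched the natural dynamics exactly, each step would land on the desired d(t+1); the residual \varepsilon = x(t+1) - d(t+1) is exactly the model error used to update the weights.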


See also

* Radial basis function kernel
* Instance-based learning
* In Situ Adaptive Tabulation
* Predictive analytics
* Chaos theory
* Hierarchical RBF
* Cerebellar model articulation controller
* Instantaneously trained neural networks
* Support vector machine



Further reading

* J. Moody and C. J. Darken, "Fast learning in networks of locally tuned processing units," Neural Computation, 1, 281–294 (1989). See also: Radial basis function networks according to Moody and Darken.
* T. Poggio and F. Girosi, "Networks for approximation and learning," Proc. IEEE 78(9), 1484–1487 (1990).
* Roger D. Jones, Y. C. Lee, C. W. Barnes, G. W. Flake, K. Lee, P. S. Lewis, and S. Qian, "Function approximation and time series prediction with neural networks," Proceedings of the International Joint Conference on Neural Networks, June 17–21, p. I-649 (1990).
* Simon Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Upper Saddle River, NJ: Prentice Hall, 1999. ISBN 0-13-908385-5.
* S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks," IEEE Transactions on Neural Networks, Vol. 2, No. 2 (March 1991).