An RBF, or Radial Basis Function, neural network consists of an input layer, a hidden layer, and an output layer. The neurons in the hidden layer apply Gaussian transfer functions, whose outputs are inversely proportional to the distance from the neuron’s centre.
RBF networks are related to K-Means clustering, Probabilistic Neural Networks (PNN), and General Regression Neural Networks (GRNN). The primary difference between RBFNN and PNN or GRNN networks is that PNN and GRNN have one neuron for every point in the model’s training file.
RBF networks, on the other hand, have a variable number of neurons, usually far fewer than the number of training points in the model. If the training set is of small or medium size, PNN or GRNN networks are generally more accurate than RBF networks.
For large training sets, however, PNN and GRNN networks become impractical, so RBFNN is the better option.
How do RBF Networks work?
The application of RBF networks differs from that of other neural networks. Conceptually, RBF neural networks are very close to K-Nearest Neighbour (k-NN) models. The idea is that an item’s predicted target value is likely to be close to the target values of other items with similar predictor-variable values. A nearest-neighbour classification depends on how many neighbouring points are taken into consideration.
An RBF network places one or more RBF neurons in the space described by the predictor variables. This space is multi-dimensional: the number of predictor variables determines the number of dimensions. For each neuron, the Euclidean distance is calculated from the neuron’s centre to the point being evaluated. An RBF, also called a kernel function, is then applied to that distance to compute each neuron’s weight:
Weight = RBF(distance)
The farther a neuron is from the point being evaluated, the less influence it has.
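The weight rule above can be sketched in Python with NumPy, assuming the familiar Gaussian kernel (the spread value here is a hypothetical choice for illustration):

```python
import numpy as np

def gaussian_rbf(distance, spread):
    """Gaussian kernel: the weight shrinks as distance from the centre grows."""
    return np.exp(-(distance ** 2) / (2 * spread ** 2))

# A neuron 0.5 units from the point contributes more than one 2.0 units away.
near = gaussian_rbf(0.5, spread=1.0)
far = gaussian_rbf(2.0, spread=1.0)
assert near > far
```

At zero distance the kernel returns 1, its maximum, and it decays smoothly toward 0 as the distance increases.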
What is The Radial Basis Function, and what are its types?
Mathematically, a radial basis function (RBF) is a real-valued function φ whose value depends only on the distance between the input and a fixed point. That point can be the origin, in which case φ(x) = φ̂(‖x‖), or another fixed point c, called the centre, in which case φ(x) = φ̂(‖x − c‖). Any function satisfying φ(x) = φ̂(‖x‖) is a radial function.
Different types of RBFs are used, the most familiar being the Gaussian function. The predicted value for a new point is found by summing, over all neurons, the output of each RBF function multiplied by that neuron’s computed weight.
The RBF of a particular neuron has a centre and a radius (also called a spread), and these values are distinct for every neuron. The radius may differ along each dimension of the network.
If the spread is large, neurons at a distance from a given point in space can have a more significant influence than expected.
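The effect of the spread can be illustrated with a minimal sketch, assuming a Gaussian kernel and hypothetical spread values: the same distant point is nearly ignored with a narrow spread but still contributes noticeably with a wide one.

```python
import numpy as np

distance = 3.0  # the same distant point in both cases
spread_narrow, spread_wide = 1.0, 5.0

narrow = np.exp(-(distance ** 2) / (2 * spread_narrow ** 2))  # ~0.011: nearly ignored
wide = np.exp(-(distance ** 2) / (2 * spread_wide ** 2))      # ~0.835: still influential
```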
The Architecture of RBF Networks
RBF networks possess three layers:
Input layer – This layer contains one neuron for every predictor variable. The input neurons pass the values on to each neuron in the hidden layer.
Hidden layer – This layer contains a variable number of neurons, determined during training. Each hidden neuron computes the Euclidean distance between the test case and the neuron’s centre, then applies the RBF kernel function to that distance using the spread values. The result is passed to the summation layer.
Summation layer – The value produced by each hidden-layer neuron is multiplied by that neuron’s weight, and the weighted values are summed. This sum is the network output.
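The three layers described above can be sketched as a forward pass in Python with NumPy. This is a minimal illustration assuming a Gaussian kernel; the centres, spreads, and weights are hypothetical values, not trained ones.

```python
import numpy as np

def hidden_layer(x, centres, spreads):
    """Each hidden neuron: Euclidean distance to its centre, then the Gaussian kernel."""
    dists = np.linalg.norm(centres - x, axis=1)
    return np.exp(-(dists ** 2) / (2 * spreads ** 2))

def summation_layer(activations, weights):
    """Each hidden output is multiplied by its weight; the sum is the network output."""
    return np.dot(activations, weights)

# Hypothetical network: 2 predictor variables, 3 hidden neurons.
centres = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
spreads = np.array([1.0, 1.0, 1.0])
weights = np.array([0.5, -0.2, 0.7])

x = np.array([0.5, 0.5])  # input layer: one value per predictor variable
output = summation_layer(hidden_layer(x, centres, spreads), weights)
```

Here the input point is equidistant from all three centres, so each hidden neuron fires equally and the output is that shared activation times the sum of the weights.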
How are RBF Networks Trained?
Several methods are used for training RBF networks. One approach is to use K-means clustering to find cluster centres, which are then used as the centres of the RBF functions. However, K-means clustering is computationally expensive, and it does not generate the optimal number of centres.
Another way of training an RBF network is to use a random subset of the training points as the centres. For this method, the following factors need to be determined:
- How many neurons does the hidden layer contain?
- What are the coordinates of the centre of every RBF function in the hidden layer?
- In each dimension, what is the radius or spread of each RBF function?
- What are the weights applied to the RBF function outputs in the summation layer?
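The random-subset approach can be sketched end to end in Python with NumPy. This is a minimal illustration under stated assumptions: the training data is a hypothetical noisy sine curve, the hidden-layer size and a single shared spread are chosen by hand, and the summation-layer weights are fitted by ordinary linear least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D training set: noisy samples of sin(x).
X = rng.uniform(0, 2 * np.pi, size=(40, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.05, size=40)

n_hidden = 8
centres = X[rng.choice(len(X), size=n_hidden, replace=False)]  # random subset as centres
spread = 1.0                                                   # shared spread, chosen by hand

# Hidden-layer activations for every training point (Gaussian kernel of the distances).
dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
H = np.exp(-(dists ** 2) / (2 * spread ** 2))

# Summation-layer weights via linear least squares on the activations.
weights, *_ = np.linalg.lstsq(H, y, rcond=None)

# Prediction for a new point: kernel outputs times the fitted weights.
x_new = np.array([[1.0]])
d_new = np.linalg.norm(x_new[:, None, :] - centres[None, :, :], axis=2)
pred = np.exp(-(d_new ** 2) / (2 * spread ** 2)) @ weights
```

Solving for the weights is a linear problem once the centres and spreads are fixed, which is what makes this training scheme cheap compared with fitting all parameters jointly.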
This is a brief explanation of a Radial Basis Function neural network. To learn more about the same, you can visit the Blog section on the website of E2E Networks.