that is considered to be the birth of a new category of neural networks, named
radial basis function networks .
The new category of networks was enthusiastically welcomed by the neural
network community because these networks demonstrated an improved
capability for solving pattern separation and classification problems.
Backpropagation networks, in spite of their universal approximation capability, fail
to be reliable pattern classifiers. This is because, during the training phase,
multilayer perceptron networks build strictly separating hyperplanes that exactly
classify the given examples, so that new, unknown examples are classified almost
at random. This is a consequence of using the sigmoidal function as the network
activation function: it resembles the unit step function, which is a global
function. Moreover, the sigmoidal function, since it belongs to the set of monotonic
basis functions, decays slowly over a large region of its arguments.
Therefore, networks using this kind of activation function can reach a very
good overall approximation quality over a large region of arguments; however, they
cannot exactly reproduce the function values at the given points. For this one needs
locally restricted basis functions, such as the Gaussian function, bell-shaped
functions, wavelets, or the B-spline functions.
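The contrast between a global, monotonic basis function and a locally restricted one can be illustrated numerically. The following sketch (with illustrative values) compares a sigmoid with a Gaussian at increasing distance from the origin:

```python
import math

def sigmoid(x):
    """Global, monotonic basis function: decays slowly and
    saturates towards 1 for large arguments."""
    return 1.0 / (1.0 + math.exp(-x))

def gaussian(x, centre=0.0, spread=1.0):
    """Locally restricted basis function: peaks at the centre
    and decays quickly away from it."""
    return math.exp(-((x - centre) ** 2) / (2.0 * spread ** 2))

# Far from the origin the sigmoid is still "active" (close to 1),
# while the Gaussian has practically vanished.
for x in (0.0, 2.0, 5.0, 10.0):
    print(f"x={x:5.1f}  sigmoid={sigmoid(x):.4f}  gaussian={gaussian(x):.6f}")
```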
The locally restricted functions can be centred with the exact values at some
selected argument values. The function values around these selected argument
positions can decay relatively fast, controlled by the approximation algorithm.
Powell (1988) suggested that the locally restricted basis functions should generally
have the form
F(x) = \sum_{i=1}^{n} w_i \, \varphi(\lVert x - x_i \rVert),

where \varphi is a set of nonlinear functions relying on the Euclidean distance
\lVert x - x_i \rVert. Moody and Darken (1989) selected for their radial basis function networks
the exponential activation function
F_i(x) = \exp\!\left( -\frac{\lVert x - c_i \rVert^2}{2\sigma_i^2} \right),
which is similar to the Gaussian density function centred at c_i. The function spread
\sigma_i around the centre determines the rate of the function's decay with its distance
from the centre.
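As a sketch of the Moody-Darken activation, the network output can be computed as a weighted sum of such Gaussian basis functions. The centres, spreads, and weights below are illustrative values, not from the text:

```python
import math

def rbf_output(x, centres, spreads, weights):
    """Weighted sum of Gaussian basis functions:
    F(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma_i^2))."""
    total = 0.0
    for c, sigma, w in zip(centres, spreads, weights):
        # Squared Euclidean distance between the input and this centre.
        dist_sq = sum((xj - cj) ** 2 for xj, cj in zip(x, c))
        total += w * math.exp(-dist_sq / (2.0 * sigma ** 2))
    return total

# Two basis functions in a 2-D input space (illustrative values).
centres = [(0.0, 0.0), (1.0, 1.0)]
spreads = [0.5, 0.5]
weights = [1.0, -1.0]
print(rbf_output((0.0, 0.0), centres, spreads, weights))
```

At the input (0, 0) the first basis function is at its peak (value 1), while the second has already decayed strongly, illustrating the local character of the Gaussian.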
The common configuration of an RBF network consists of three layers
(Figure 3.5): the input layer, the hidden layer, and the output layer. The
activation functions are placed in the neurons of the hidden layer. The input
layer of the network is directly connected with the hidden layer, so that only
the connections between the hidden layer and the output layer are weighted. As a
consequence, the training procedure here is entirely different from that in
backpropagation networks. The most important issue here is the selection for each
.
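Because only the hidden-to-output connections carry weights, once the centres and spreads of the hidden neurons are fixed, training the output layer reduces to an ordinary linear least-squares problem. A minimal sketch of this idea, assuming Gaussian hidden units, centres taken from the training inputs, and NumPy for the solve (all values illustrative):

```python
import numpy as np

def rbf_design_matrix(X, centres, spreads):
    """Hidden-layer activations: one Gaussian basis function per centre."""
    # Squared Euclidean distances between every input and every centre.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spreads[None, :] ** 2))

# Toy 1-D regression problem (illustrative data, not from the text).
X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
y = np.sin(3.0 * X[:, 0])

# Fix the centres and spreads (here: a subset of the inputs); only the
# hidden-to-output weights remain to be fitted, which is a linear problem.
centres = X[::2]
spreads = np.full(len(centres), 0.3)

Phi = rbf_design_matrix(X, centres, spreads)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = Phi @ w
print("max abs error:", np.abs(y_hat - y).max())
```

Compared with backpropagation, no iterative gradient descent over hidden-layer parameters is needed here; the least-squares solve fits the output weights in one step.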