where d is the dimension of the input vector x and c_ij stands for the j-th component of the center vector of the i-th neuron. The most often used radial basis function ϕ(a_i) is the Gaussian function:

ϕ(a_i) = exp(−a_i² / (2σ_i²))    [5.15]

where σ_i² is the width of the basis function (Lim et al., 2003).
The response surface of a single radial unit is a Gaussian function with
a peak at its center (Takagaki et al., 2010).
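As a minimal illustrative sketch (the code and the names in it are ours, not from the source), the Gaussian response of a single radial unit, equation [5.15], can be computed as:

import numpy as np

def gaussian_rbf(x, center, sigma):
    # a_i: Euclidean distance between the input vector x and the
    # unit's center vector (cf. the definition above).
    a = np.linalg.norm(x - center)
    # Gaussian basis function [5.15]; sigma**2 is the width.
    return np.exp(-a**2 / (2.0 * sigma**2))

# The response peaks at 1.0 when x coincides with the center
# and decays monotonically with distance from it.
print(gaussian_rbf(np.array([0.5, 1.0]), np.array([0.5, 1.0]), sigma=0.8))  # 1.0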
An RBFNN typically trains faster than a conventional MLP network trained with the BP algorithm (Rumelhart et al., 1986). Training of the RBFNN is a two-stage process comprising unsupervised and supervised learning. In the unsupervised stage, only the input vectors are presented to the network, whereas in the subsequent supervised stage, desired output vectors are specified for the inputs (Lim et al., 2003). Initial centers of the radial basis units are determined during the first stage, often with the K-means clustering algorithm (Lloyd, 1982), and are then further optimized. In the second stage, cross-validation is often used to determine the weights between the hidden and output layers (Bishop, 1995).
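To make the two-stage procedure concrete, here is a minimal sketch under stated assumptions: scikit-learn's KMeans for the unsupervised stage, a single shared width sigma, and ordinary least squares for the hidden-to-output weights. The source mentions cross-validation in the second stage; in this sketch that would correspond to selecting sigma and the number of hidden units, which are left as parameters.

import numpy as np
from sklearn.cluster import KMeans

def train_rbfnn(X, Y, n_hidden, sigma):
    # Stage 1 (unsupervised): place the radial unit centers with
    # K-means, using only the input vectors X.
    centers = KMeans(n_clusters=n_hidden, n_init=10).fit(X).cluster_centers_
    # Gaussian hidden-layer activations for every training input.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-dists**2 / (2.0 * sigma**2))
    # Stage 2 (supervised): fit the hidden-to-output weights against
    # the desired outputs Y by linear least squares.
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return centers, W

def rbfnn_predict(X, centers, W, sigma):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-dists**2 / (2.0 * sigma**2)) @ W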
Other types of networks
The Modular Neural Network (MNN) consists of two main branches. During training, the branches compete against each other, resulting in a system capable of better generalization. Other
types of neural networks include: Probabilistic Neural Networks (PNN),
Learning Vector Quantization Networks (LVQ), Cascade Correlation
Networks (CCN), Functional Link Networks (FLN), Kohonen Networks
(KN), Hopfield Neural Network (HNN), Gram-Charlier Networks (GCN),
Hebb Networks (HN), Adaline Networks (AN), Hetero-associative
Networks (HN), Hybrid Networks (HN), Holographic Associative Memory
(HAM), Spiking Neural Networks (SNN), Cascading Neural Networks
(CNN), Compositional Pattern-producing Neural Networks (CPPNN), etc.
(Zaknich, 2003; Kollias et al., 2006; Nisbet et al., 2009).
Dynamic Neural Networks (DNN)
Dynamic (recurrent) Neural Networks have more complex architectures than static neural networks. They provide the possibility