Static neural networks
A distinctive feature of static neural networks is the feed-forward flow of signals, directed from inputs towards outputs. There is no recurrence of the signal, nor the time-dependent elaboration of data that is characteristic of dynamic neural networks (DNN). Frequently used static neural networks include the multilayered perceptron (MLP) neural network, the generalized regression neural network (GRNN), and the radial basis function neural network (RBFNN).
The multilayered perceptron (MLP) neural network is one of the simplest ANNs and consists of an input layer, an output layer, and one or more hidden layers of neurons that apply sigmoid functions to their inputs. The perceptron, on which the MLP is built, was first introduced by Frank Rosenblatt in 1958. MLP neural networks are usually the first choice when analyzing a problem with ANNs, and they have been shown to be universal function approximators (Cybenko, 1989). The weights of an MLP network are optimized by backward propagation of the error during the training phase. For an MLP neural network, the relationship between the inputs and outputs can be represented as:
$$y_o = f_o\!\left(\sum_{h=1}^{N_h} w_{ho}\, f_h\!\left(\sum_{i=1}^{N_i} w_{ih}\, x_i + b_h\right) + b_o\right) \qquad [5.10]$$
where $x_i$ and $y_o$ are the network's inputs and outputs; $w_{ih}$ and $w_{ho}$ ($i = 1, 2, \ldots, N_i$; $o = 1, 2, \ldots, N_o$) are the weights of the connections between the input and hidden units and between the hidden and output units, respectively ($h = 1, 2, \ldots, N_h$ indexes the hidden units); $b_h$ and $b_o$ are the biases of the hidden and output units; and $f_h(\cdot)$ and $f_o(\cdot)$ are the hidden and output activation functions, respectively (Zhang and Man, 1998).
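As a minimal sketch of Eq. [5.10], assuming NumPy and sigmoid activations for both layers (the function and parameter names below, such as mlp_forward, are ours and not from the source), the forward pass can be written as:

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid; a common choice for both f_h and f_o."""
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, w_ih, b_h, w_ho, b_o, f_h=sigmoid, f_o=sigmoid):
    """Single-hidden-layer MLP forward pass per Eq. [5.10].

    x    : inputs, shape (N_i,)
    w_ih : input-to-hidden weights, shape (N_h, N_i)
    b_h  : hidden biases, shape (N_h,)
    w_ho : hidden-to-output weights, shape (N_o, N_h)
    b_o  : output biases, shape (N_o,)
    """
    hidden = f_h(w_ih @ x + b_h)     # hidden-unit activations
    return f_o(w_ho @ hidden + b_o)  # network outputs y_o

# Example: a 3-input, 4-hidden, 2-output network with random weights
rng = np.random.default_rng(0)
y = mlp_forward(rng.normal(size=3),
                rng.normal(size=(4, 3)), rng.normal(size=4),
                rng.normal(size=(2, 4)), rng.normal(size=2))
print(y)  # two output values in (0, 1)
```

In practice these weights would be set by the backpropagation training mentioned above rather than drawn at random.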
A bias is a neuron that can be added to a layer and always outputs a constant value; the weight on its connection is adjusted during the training process. If we think of the network output values as points on a curve, the bias allows us to shift the curve and obtain an arbitrary intercept (e.g. if there were no bias, the curve would always pass through the origin).
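To make the intercept effect concrete, here is a toy example of our own (not from the source): a single sigmoid neuron evaluated at x = 0 with and without a bias term.

```python
import numpy as np

def neuron(x, w, b):
    """A single sigmoid neuron: f(w*x + b)."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

# At x = 0 the weighted sum w*x is 0, so without a bias the output is pinned
print(neuron(0.0, w=2.0, b=0.0))  # 0.5, regardless of w
print(neuron(0.0, w=2.0, b=1.5))  # ~0.82: the bias shifts the response curve
```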
Support vector machines (SVM) are models similar to MLP networks; in fact, an SVM model using a sigmoid kernel function is equivalent to a two-layer perceptron neural network. The most relevant description of this technique is provided elsewhere (Vapnik, 1995). In contrast to MLPs, which are mainly used for modeling/regression purposes, SVMs are binary classifiers. During the training process, an SVM builds a model that is able to classify any given sample into one of two classes.
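For instance, a sketch using scikit-learn's SVC (a library choice of ours; the text does not prescribe one) with the sigmoid kernel mentioned above, on hypothetical toy data:

```python
from sklearn.svm import SVC

# Toy binary problem: label depends on which side of x1 = x2 a point falls
X = [[0, 1], [1, 2], [2, 3], [1, 0], [2, 1], [3, 2]]
y = [0, 0, 0, 1, 1, 1]

# kernel="sigmoid" mirrors the two-layer-perceptron analogy in the text;
# "rbf" or "linear" kernels are more common defaults in practice.
clf = SVC(kernel="sigmoid", gamma="scale")
clf.fit(X, y)

print(clf.predict([[0, 2], [3, 1]]))  # each sample is assigned to one of the two classes
```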