Multilayer perceptron networks
Figure 3.4 Multilayer perceptron architecture: the inputs $x_1, \dots, x_n$ feed an input layer of $n$ neurons, connected through the weights $w^1$ to a hidden layer of $h$ neurons, which in turn is connected through the weights $w^2$ to an output layer of $m$ neurons producing the outputs $y_1, \dots, y_m$.
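In formula form (a minimal restatement of what the figure depicts, assuming one common activation function $f$ in both layers, which the figure leaves unspecified), the network computes

$$
y_k = f\!\left( \sum_{j=1}^{h} w^2_{jk} \, f\!\left( \sum_{i=1}^{n} w^1_{ij} \, x_i \right) \right), \qquad k = 1, \dots, m,
$$

where $w^1_{ij}$ connects input neuron $i$ to hidden neuron $j$ and $w^2_{jk}$ connects hidden neuron $j$ to output neuron $k$.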
Rumelhart and McClelland (1986) proposed the backpropagation learning rule for multilayer neural networks, and it has since been widely adopted. Various accelerated versions of the rule have later been developed to speed up the learning process. Multilayer perceptron networks trained with the backpropagation algorithm have in the meantime come to be called simply backpropagation networks.
The learning capability of backpropagation networks is mainly due to the internal mapping of characteristic signal features onto the hidden layer during network training. The mappings stored in this layer during the training phase can be automatically retrieved during the application phase for further processing. Although the feature-capturing capability of the network can be extended enormously by adding a second hidden layer, the additional training and computational time this requires advises against doing so unless the complexity of the problem to be solved absolutely demands it.
Training of backpropagation networks (without internal feedback) is a process of supervised learning, relying on the error-correction learning method, in which the desired (given) output pattern is expected to be matched by the final output pattern of the network within a specified accuracy. This is achieved by adjusting the network weights according to a parameter-tuning algorithm, traditionally the backpropagation algorithm, which is considered a generalization of the delta rule.
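As a concrete illustration of this error-correction scheme, the following is a minimal sketch of one-hidden-layer backpropagation training in Python/NumPy. It is not code from the text; the particular choices (sigmoid activations, XOR as the desired input/output patterns, the learning rate eta, and the layer sizes) are assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(a):
    # Append a constant-1 column so each layer also learns a bias weight.
    return np.hstack([a, np.ones((a.shape[0], 1))])

rng = np.random.default_rng(0)
n, h, m, eta = 2, 4, 1, 0.5
W1 = rng.normal(scale=0.5, size=(n + 1, h))  # input-to-hidden weights w^1
W2 = rng.normal(scale=0.5, size=(h + 1, m))  # hidden-to-output weights w^2

# Desired input/output patterns for supervised learning (XOR, an assumption).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)

for _ in range(20000):
    # Forward pass through hidden and output layers.
    H = sigmoid(add_bias(X) @ W1)
    Y = sigmoid(add_bias(H) @ W2)
    # Error signals (deltas), propagated backward from the output layer.
    delta_out = (D - Y) * Y * (1.0 - Y)                  # generalized delta rule
    delta_hid = (delta_out @ W2[:-1].T) * H * (1.0 - H)  # skip the bias row
    # Weight corrections proportional to delta and the incoming activation.
    W2 += eta * add_bias(H).T @ delta_out
    W1 += eta * add_bias(X).T @ delta_hid

Y = sigmoid(add_bias(sigmoid(add_bias(X) @ W1)) @ W2)
print(np.round(Y.ravel(), 2))  # should end up close to [0, 1, 1, 0]
```

Each weight update moves a connection in proportion to its local error signal (the delta) and the activation feeding it, which is exactly the generalized-delta-rule character referred to above.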
3.3.2 Radial Basis Function Networks
The idea of function approximation using localized basis functions grew out of the work of Bashkirov et al. (1964) and of Aizerman, Braverman and Rozenoer (1964) on the potential function approach to pattern recognition. Moody and Darken (1989) used this idea to implement a fast-learning neural network structure with locally tuned processing units. Similarly, Broomhead and Lowe (1988) described an approach to local functional approximation based on adaptive function interpolation. This work found a remarkable resonance among researchers working on function approximation using radial basis functions.
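To make the notion of locally tuned processing units concrete, here is a minimal sketch of a radial basis function approximator in Python/NumPy, assuming Gaussian basis functions, hand-picked centers and width, and a least-squares fit of the linear output weights (a two-stage scheme in the spirit of Moody and Darken, not their original implementation):

```python
import numpy as np

def gaussian_design(x, centers, width):
    # Each column is one locally tuned unit: it responds only near its center.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# Target function to approximate on [0, 1] (an arbitrary choice for the demo).
x = np.linspace(0.0, 1.0, 50)
d = np.sin(2.0 * np.pi * x)

centers = np.linspace(0.0, 1.0, 10)  # fixed, evenly spaced centers; Moody and
width = 0.1                          # Darken obtain theirs by clustering

Phi = gaussian_design(x, centers, width)
# Linear output layer: choose weights w to minimize ||Phi w - d||^2.
w, *_ = np.linalg.lstsq(Phi, d, rcond=None)

print(f"max approximation error: {np.max(np.abs(Phi @ w - d)):.4f}")
```

Because each Gaussian unit responds appreciably only near its own center, fitting the output weights reduces to a linear problem, which is the main source of the fast learning mentioned above.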