temporal or spectral features [5, 55, 56]. Key features for medical image
processing are shape, texture, contours, and size; in most cases they
describe the region of interest [66, 67].
Backpropagation-type neural networks
MLPs are trained based on the simple idea of the steepest-descent
method. The core of the algorithm is a recursive procedure for obtaining
a gradient vector in which each element is defined as the derivative of a
cost function (error function) with respect to a parameter. This learning
algorithm, known as the error backpropagation algorithm, operates in a
supervised mode, which requires knowledge of the desired output for any
given input, and is bidirectional, proceeding in two passes. In the forward
pass, the output of the network in response to an input is computed; in the
backward pass, the weights are updated. The error terms of the output layer
are a function of the target $c^t$ and the output of the perceptron,
$(o_1, o_2, \ldots, o_n)$.
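To make the steepest-descent idea concrete, each weight is moved against the gradient of the error function. The update rule below is a generic sketch; the learning-rate symbol $\eta$ is an assumption, as this excerpt does not name one:

$$w_{ij}(t+1) = w_{ij}(t) - \eta \, \frac{\partial E}{\partial w_{ij}}$$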
The algorithmic description of backpropagation is given below [61];
a minimal code sketch follows the listing:
1. Initialization: Initialize the weights of the perceptron randomly with
numbers between -0.1 and 0.1; that is,
$$w_{ij} = \mathrm{random}\,[-0.1, 0.1], \qquad 0 \le i \le l, \quad 1 \le j \le m \qquad (6.3)$$
$$v_{jk} = \mathrm{random}\,[-0.1, 0.1], \qquad 0 \le j \le m, \quad 1 \le k \le n$$
2. Presentation of training patterns: Present $p^t = [p_1^t, p_2^t, \ldots, p_l^t]$ from
the training pair $(p^t, c^t)$ to the perceptron and apply steps 1, 2, and 3
from the perceptron classification algorithm described above.
3. Forward computation (output layer): Compute the errors $\delta_{ok}$, $1 \le k \le n$,
in the output layer using

$$\delta_{ok} = o_k (1 - o_k)(c_k - o_k), \qquad (6.4)$$

where $c^t = [c_1^t, c_2^t, \ldots, c_n^t]$ represents the correct class of $p^t$. The vector
$(o_1, o_2, \ldots, o_n)$ represents the output of the perceptron.
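The following is a minimal NumPy sketch of steps 1 through 3, not the book's implementation: it assumes a single hidden layer of size m with logistic-sigmoid activations (consistent with the $o_k(1-o_k)$ factor in Eq. (6.4)) and a bias unit at index 0, matching the index ranges in Eq. (6.3); all function names are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def init_weights(l, m, n):
    # Step 1 (Eq. 6.3): uniform random weights in [-0.1, 0.1].
    # Index 0 holds the bias weight, matching 0 <= i <= l and 0 <= j <= m.
    w = rng.uniform(-0.1, 0.1, size=(l + 1, m))   # input -> hidden
    v = rng.uniform(-0.1, 0.1, size=(m + 1, n))   # hidden -> output
    return w, v

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(p, w, v):
    # Step 2: present the pattern p = [p_1, ..., p_l]; a constant 1 is
    # prepended so that index 0 acts as the bias input.
    a = np.concatenate(([1.0], p))
    h = sigmoid(a @ w)                            # hidden-layer outputs
    o = sigmoid(np.concatenate(([1.0], h)) @ v)   # outputs (o_1, ..., o_n)
    return h, o

def output_deltas(o, c):
    # Step 3 (Eq. 6.4): delta_ok = o_k (1 - o_k) (c_k - o_k).
    return o * (1.0 - o) * (c - o)

# Example: one forward pass and the output-layer error terms.
l, m, n = 4, 3, 2                                 # layer sizes (assumed)
w, v = init_weights(l, m, n)
p = np.array([0.2, 0.7, 0.1, 0.9])                # training pattern p^t
c = np.array([1.0, 0.0])                          # correct class c^t
h, o = forward(p, w, v)
print(output_deltas(o, c))

The backward pass that propagates these error terms to the hidden layer and updates the weights w and v continues beyond this excerpt, so it is not sketched here.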