assigned to the constituent nodes, $w^1_{ij}$, such that they reflect some physical characteristic of the external input $x_p$, where $p = 1, 2, \ldots, P$ and $P$ is the total number of data points used for training. The weight vector for each node is equivalent to the cluster center in the feature space. Every input vector $x_p$ is connected to each node in the network lattice or map by a weight vector, $w^1_{ij}$. Initially, the weights are set to small random values located in the $n$-dimensional data space. A node located at $(i, j)$ in the network has an associated variable neighborhood operator that covers an area of the map given by $(2NE_{ij} + 1) \times (2NE_{ij} + 1)$, where $NE_{ij}$ is the decreasing neighborhood radius. The weight adaptation algorithm is designed such that the selected node and all nodes residing in its neighborhood are simultaneously updated. The size of the neighborhood is decreased as training progresses until it consists of only the winning unit.
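As a running illustration, a minimal numpy-based setup for such a lattice might look as follows; the array names (`W`, `counts`) and the dimensions are hypothetical choices for this sketch, not values from the source:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

I, J, n = 10, 10, 4   # lattice height, lattice width, input dimension (assumed)

# Layer-1 weights w1_ij: one small random n-dimensional vector per lattice node.
W = 0.01 * rng.standard_normal((I, J, n))

# Win counts c_ij per node, used later by the count-dependent function in Eq. (5.7).
counts = np.zeros((I, J))
```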
For each randomly selected $x_p$, an error or difference, $D^p_{ij}$, is computed between the input vector and the weight vectors for all the cluster units in the network:

$$D^p_{ij} = \sum_{k=1}^{n} \left( x_k - w^1_{kij} \right)^2 \varphi(c_{ij}) \qquad (5.7)$$
where $x_k$ is the $k$th input, $w^1_{kij}$ is the weight from the $k$th input to the $(i, j)$th cluster unit, $i = 1, 2, \ldots, I$, and $j = 1, 2, \ldots, J$. A count-dependent nondecreasing function, $\varphi(c_{ij})$, with $c_{ij}$ the number of times unit $(i, j)$ has won the competition, is used to prevent cluster underutilization.
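To make the winner-selection step concrete, the sketch below evaluates Eq. (5.7) for every cluster unit, reusing the hypothetical `W` and `counts` arrays from above. The source does not give the exact form of the count-dependent function, so a simple nondecreasing penalty $\varphi(c_{ij}) = 1 + \lambda c_{ij}$ is assumed here:

```python
import numpy as np

def biased_distances(x, W, counts, lam=0.01):
    """Eq. (5.7): D_ij = sum_k (x_k - w1_kij)^2 * phi(c_ij).

    x      : (n,) input vector x_p
    W      : (I, J, n) layer-1 weight vectors w1_ij (cluster centers)
    counts : (I, J) win counts c_ij
    lam    : strength of the assumed penalty phi(c) = 1 + lam * c
    """
    sq_dist = ((W - x) ** 2).sum(axis=2)   # squared distance per unit, shape (I, J)
    return sq_dist * (1.0 + lam * counts)  # penalize frequently winning units

def select_winner(x, W, counts):
    """Return the lattice coordinates (i, j) of the unit minimizing D_ij."""
    D = biased_distances(x, W, counts)
    return np.unravel_index(np.argmin(D), D.shape)
```

Units that win often accumulate a larger penalty, so underutilized units eventually get a chance to win; this is the intent of the count-dependent term in Eq. (5.7).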
During the adaptation or learning process, the weights of the cluster unit $(i, j)$ nearest to the input vector $x_p$, called the winning unit or neuron, and of all the units residing within the specified neighborhood, $NE_{ij}$, are updated using

$$w_{ij}(\mathrm{new}) = w_{ij}(\mathrm{old}) + \mu \left( x_p - w_{ij}(\mathrm{old}) \right) \qquad (5.8a)$$
where

$$NE_{ij} = NE_{\mathrm{initial}} - 0.001\,t \qquad (5.8b)$$

$\mu$ is a predefined learning rate, $t$ is the training iteration count, and $NE_{\mathrm{initial}}$ is the initial neighborhood size in terms of the number of units. The radius of the topological neighborhood, $NE_{ij}$, is thus gradually reduced.
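A single adaptation step combining Eqs. (5.8a) and (5.8b) can be sketched as below, continuing the hypothetical names of the previous sketches (`W`, `counts`, `select_winner`); the square $(2NE_{ij}+1) \times (2NE_{ij}+1)$ neighborhood and the linear decay schedule follow the reconstruction above:

```python
def sofm_step(x, W, counts, ne_initial, t, mu=0.1):
    """One training step: pick the winner, then apply Eq. (5.8a) to the winner
    and its neighborhood, whose radius decays per Eq. (5.8b)."""
    wi, wj = select_winner(x, W, counts)   # conscience-biased winner
    counts[wi, wj] += 1                    # update win count c_ij

    r = max(0, int(round(ne_initial - 0.001 * t)))   # Eq. (5.8b), linear decay
    I, J, _ = W.shape

    # The (2r+1) x (2r+1) square neighborhood, clipped at the lattice edges.
    i_lo, i_hi = max(0, wi - r), min(I, wi + r + 1)
    j_lo, j_hi = max(0, wj - r), min(J, wj + r + 1)

    # Eq. (5.8a): move every unit in the neighborhood toward x_p.
    W[i_lo:i_hi, j_lo:j_hi] += mu * (x - W[i_lo:i_hi, j_lo:j_hi])
    return W, counts
```

As the radius shrinks to zero, the update collapses onto the winning unit alone, which matches the behavior described above.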
Initially, the nodes are uniformly distributed in the planar lattice. As the training process progresses, the weights assigned to the nodes become organized in a natural manner, and the cluster centers move to reflect the similarity between clusters.
The adaptation of the weights in the second layer, used to identify the class, occurs only after the parameters $w_{ij}$ of the SOFM in layer 1 have been determined. The nodes in the output layer are computed using a least-mean-squares (LMS) learning algorithm that utilizes error correction. The training data for this process is a set of input-output pairs $(x_p, c_m)$, where $c_m$ is the desired class assignment for training vector $x_p$. Once more, the weight adaptation for the feedforward weights of the $m$th node in layer 2 is
$$w_m(\mathrm{new}) = w_m(\mathrm{old}) + \eta \left( w^1_{ij} - c_m \right) \qquad (5.9)$$
where $\eta$ is the learning rate, which determines the step size for changing the value of the weight after each iteration, $w^1_{ij}$ is the weight vector of the cluster nearest to the input vector $x_p$, and $m$ is the identifier of the output neuron or class.
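Eq. (5.9) is terse about dimensions, so the following sketch adopts one common reading, which is an assumption rather than the source's exact formulation: each cluster unit carries a feedforward weight vector to the $M$ class nodes, and the weights of the winning unit are corrected toward a one-hot class target by LMS:

```python
import numpy as np

def layer2_update(W2, winner, target, eta=0.05):
    """LMS error correction for the layer-2 class weights (one reading of Eq. 5.9).

    W2     : (I, J, M) feedforward weights from cluster units to M class nodes
    winner : (i, j) winning cluster unit for training vector x_p
    target : (M,) one-hot encoding of the desired class c_m
    eta    : learning rate (step size per iteration)
    """
    wi, wj = winner
    W2[wi, wj] += eta * (target - W2[wi, wj])   # move toward the class target
    return W2
```

Because layer 1 is frozen at this point, this second stage amounts to a supervised calibration of the trained map rather than a retraining of the cluster centers.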
The SOFM performs relatively well on noisy data because the number of clusters is fixed and adaptation stops after training has been completed. However, for datasets with a limited number of feature vectors, the results may depend on the presentation order of the input data. Another common problem with this type of competitive learning algorithm is