7.3.4 Discussion
An in-depth analysis of Kohonen's algorithm reveals its salient features. In the update rule for the reference vectors, the gradient step µ_t decreases as the number of iterations increases. When the algorithm starts, µ_t is large, and J_som is not guaranteed to decrease. Later, when the gradient step has become small enough, the reference-vector updates are small at each iteration. In that regime, Kohonen's SOM algorithm behaves much like the dynamic clustering version of the SOM.
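As an illustration, one common way to obtain such a decreasing gradient step (the exponential decay law below is an assumption, not prescribed by the text) is:

```python
# Illustrative schedule for the gradient step: mu_t decays exponentially
# from mu_0 to mu_final over t_max iterations. The text only requires
# that mu_t decreases as the iteration count t grows.
def gradient_step(t, t_max, mu_0=0.5, mu_final=0.005):
    return mu_0 * (mu_final / mu_0) ** (t / t_max)
```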
If we assume that K^T(δ) becomes negligible when the distance δ exceeds a given threshold d_T, then K^T(δ(c,r)) is significant only for neurons that belong to a given neighborhood of neuron c, whose size is tuned by d_T. That neighborhood will be denoted by V_c(d_T). Thus, when an observation z_i is taken into account, the reference vector of the winning neuron χ(z_i) is updated, together with the reference vectors of all neurons of the neighborhood V_χ(z_i)(d_T).
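The following sketch illustrates these notions, assuming that the neurons sit on a two-dimensional grid, that δ is the Manhattan distance between grid positions, and that K^T is a Gaussian kernel; all of these are illustrative choices, not requirements of the algorithm.

```python
import numpy as np

# Assumed Gaussian neighborhood kernel K^T(delta): it becomes negligible
# once the map distance delta exceeds a threshold d_T that grows with T.
def K_T(delta_cr, T):
    return np.exp(-(delta_cr ** 2) / (T ** 2))

# Map distance delta(c, r): here, the Manhattan distance between the
# grid positions of neurons c and r (an assumed layout of the map).
def delta(pos_c, pos_r):
    return abs(pos_c[0] - pos_r[0]) + abs(pos_c[1] - pos_r[1])

# Neighborhood V_c(d_T): the neurons r whose map distance to c is at
# most d_T, i.e. those for which K^T(delta(c, r)) is still significant.
def neighborhood(c, positions, d_T):
    return [r for r, pos_r in positions.items()
            if delta(positions[c], pos_r) <= d_T]
```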
From the point of view of the neural representation, the operation of Kohonen's maps can be understood in terms of the lateral connections between neurons of the map: each neuron c is connected to its neighboring neurons r, and any modification of the reference vector w_c triggers updates of all reference vectors associated with the neurons of V_c(d_T), with intensity K^T(δ(c,r)), which decreases as the distance δ(c,r) increases.
If K^T(δ) is chosen as a threshold function (see Fig. 7.6), constant on the interval [−d_T, d_T] and equal to zero elsewhere, the difference between Kohonen's SOM and k-means becomes clear. The weight update is the same for the two algorithms; however, in Kohonen's SOM, the closest reference vector is not the only one to be updated: the reference vectors associated with the neurons of the neighborhood V_c(d_T) are updated as well. Thus, topological self-organization arises: neurons that are close on the map represent observations that are close in data space.
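A minimal sketch of that threshold kernel, with an arbitrary unit amplitude on [−d_T, d_T]:

```python
# Threshold kernel of Fig. 7.6: constant on [-d_T, d_T], zero elsewhere.
# With integer map distances and d_T < 1, only the winner itself is
# updated, which reduces the SOM update to stochastic k-means.
def K_threshold(delta_cr, d_T):
    return 1.0 if abs(delta_cr) <= d_T else 0.0
```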
When the temperature T is small, updates according to the relation

w_c^t = w_c^{t−1} − µ_t K^T(δ(c, χ^t(z_i))) (w_c^{t−1} − z_i)

are performed for a subset of all neurons only and, when d_T < 1, Kohonen's SOM algorithm is identical to stochastic k-means: in that case, the only neuron to be updated is the winner of the competition, selected by the allocation function χ.
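A minimal sketch of one such stochastic update step, reusing the K_T and delta helpers from the sketch above; the dictionary-based layout of the map is an illustrative assumption, not the book's implementation.

```python
# One stochastic SOM step, implementing
#   w_c^t = w_c^{t-1} - mu_t * K^T(delta(c, chi(z_i))) * (w_c^{t-1} - z_i)
# W maps each neuron to its reference vector; positions maps each neuron
# to its grid coordinates; chi(z_i) is the winner of the competition.
def som_update(W, positions, z_i, mu_t, T):
    winner = min(W, key=lambda c: np.linalg.norm(W[c] - z_i))
    for c in W:
        h = K_T(delta(positions[c], positions[winner]), T)
        W[c] = W[c] - mu_t * h * (W[c] - z_i)
    return W

# Usage: a 3x3 map with 2-D reference vectors drawn at random.
positions = {(i, j): (i, j) for i in range(3) for j in range(3)}
W = {c: np.random.rand(2) for c in positions}
W = som_update(W, positions, z_i=np.array([0.2, 0.7]), mu_t=0.1, T=1.0)
```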
Self-organizing maps are regarded as belonging to the family of neural methods because the neural interpretation allows a clear understanding of the training process. We elaborate on that point in the following section.
7.3.5 Neural Architecture and Topological Maps
The training algorithms described in the previous section allow the determination of the reference vector set W = {w_c ; c ∈ C} of a self-organizing map.