Fig. 7.12. Evolution of the batch training algorithm for the four-Gaussian-mixture example (pictures a and b) for two different topologies: a 1-D map with 50 neurons and a 2-D map with 10 × 10 neurons. The top pictures display the 1-D map after 20, 200, and 2,000 iterations. The same experiment was performed for the 2-D map model; the bottom pictures show the evolution after 50, 500, and 5,000 iterations. In both cases, when convergence is reached, the map covers the whole support of the observation density.
During the first phase, when T is large, the map collapses onto the center of mass of the data, and topological self-organization appears. Then, as T decreases, the map organizes itself so as to minimize the total inertia of the partition associated with the reference vector set. At the end of the algorithm, some reference vectors are positioned at the heart of the observation cloud, while others are trapped in empty or low-density regions.
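The two phases described above can be sketched in code. The following is a minimal batch SOM on a 1-D map, assuming a Gaussian neighborhood kernel and a geometric schedule for the temperature T; the function name, schedule, and parameter values are illustrative choices, not the book's exact algorithm.

```python
import numpy as np

def batch_som_1d(X, n_neurons=50, n_iters=200, T0=None, Tf=0.5, seed=0):
    """Batch SOM on a 1-D map (sketch).

    Each iteration assigns every observation to its best-matching
    neuron, then replaces each reference vector by the
    neighborhood-weighted mean of the observations.
    """
    rng = np.random.default_rng(seed)
    # Neuron coordinates on the 1-D map grid.
    grid = np.arange(n_neurons, dtype=float)
    # Initialize reference vectors from randomly chosen observations.
    W = X[rng.choice(len(X), n_neurons, replace=False)].astype(float)
    # Temperature schedule: large at first (map collapses toward the
    # center of mass and self-organizes), then shrinking (each neuron
    # fits its local region, minimizing the partition's inertia).
    T0 = n_neurons / 2.0 if T0 is None else T0
    for t in range(n_iters):
        T = T0 * (Tf / T0) ** (t / max(n_iters - 1, 1))
        # Best-matching unit (closest reference vector) per observation.
        dists = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1)
        bmu = dists.argmin(axis=1)
        # Gaussian neighborhood kernel measured on the map grid.
        H = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / (2 * T ** 2))
        # Weight of neuron k for observation i is H[k, bmu[i]].
        Wts = H[:, bmu]                         # shape (n_neurons, n_obs)
        denom = Wts.sum(axis=1, keepdims=True)
        W = (Wts @ X) / np.maximum(denom, 1e-12)
    return W, bmu
```

Because each update is a weighted mean of the data, the reference vectors always stay inside the convex hull of the observations; with a wide initial kernel they first cluster near the center of mass, exactly as described above.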
A close look at the resulting partition provides an interpretation of the hidden structure of the observations. Figure 7.13 displays the map; the neurons that have not captured any observation are shown as black dots. Thus, it is possible to separate the data set into two distinct clusters: the algorithm detects natural boundaries.
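Identifying the "black dot" neurons amounts to counting how many observations each neuron captured. A minimal sketch, assuming `bmu` holds the index of the best-matching neuron for each observation (the helper name is illustrative):

```python
import numpy as np

def dead_neurons(bmu, n_neurons):
    """Return indices of neurons that captured no observation.

    Runs of such neurons on the map mark low-density regions of the
    data, i.e. the natural boundaries between clusters.
    """
    counts = np.bincount(bmu, minlength=n_neurons)
    return np.flatnonzero(counts == 0)
```

Splitting the map at these empty neurons then partitions the data set into its distinct clusters.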