8.3.1.1 Kohonen's Self-Organizing Map (SOM)
The self-organizing neural network, also called the topology-preserving map, assumes a topological structure among the cluster units. This property is observed in the brain but is not found in other artificial neural networks. There are m cluster units, arranged in a one- or two-dimensional array; the input signals are n-tuples.
The weight vector for a cluster unit serves as an exemplar of the input patterns associated with that cluster. During the self-organization process, the cluster unit whose weight vector matches the input pattern most closely (typically, in terms of the minimum Euclidean distance) is chosen as the winner. The winning unit and its neighboring units (in terms of the topology of the cluster units) then update their weights. Mistuned initial weights and slow convergence may result from a pure winner-take-all strategy, or from applying the same learning step to all neighbors. This problem can be solved by applying the Mexican-hat structure, a Gaussian-like function that excites the cooperative neighbors in close proximity and inhibits the competitive neighbors that are somewhat further away.
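To make the update concrete, here is a minimal Python sketch of one training step of a 1D SOM; the learning rate, the neighborhood width, and the use of a pure Gaussian neighborhood (a common smooth stand-in for the Mexican hat's excitatory center) are illustrative assumptions, not values from the text.

```python
import numpy as np

def som_step(W, x, lr=0.1, sigma=1.0):
    """One self-organization step for a 1D map.

    W     : (m, n) weight matrix, one row per cluster unit
    x     : (n,) input pattern
    lr    : learning step (assumed value)
    sigma : neighborhood width (assumed value)
    """
    # Winner: unit whose weight vector is closest to x (Euclidean distance)
    dists = np.linalg.norm(W - x, axis=1)
    winner = int(np.argmin(dists))

    # Gaussian neighborhood over the 1D grid of cluster units: strong
    # excitation near the winner, vanishing influence further away
    grid = np.arange(W.shape[0])
    h = np.exp(-((grid - winner) ** 2) / (2.0 * sigma ** 2))

    # Winner and its neighbors move toward the input,
    # scaled by the neighborhood strength
    W += lr * h[:, None] * (x - W)
    return winner
```

Because distant units receive a vanishing update, each input reshapes only a local patch of the map, which is what preserves the topology among the cluster units.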
8.3.1.2 Applying Conscience to Kohonen's SOM
Inspection of Fig. 8.8 may suggest that the number of clusters generated in Class 1 will be larger than, or at least equal to, that in Class 2. Although this seems reasonable, because the number of individuals in Class 1 is larger than that in Class 2, we get the opposite result using Kohonen's standard SOM.
To make an unsupervised neural network utilize information about the distribution of the patterns, the patterns must be divided into the m clusters according to how often each cluster wins. In this way, a high-density region receives a sufficient number of clusters, whereas the number of clusters is reduced in a sparse region.
We apply this conscience [18] to the update of Kohonen's SOM. After selection of the winner, i.e.,

$$
y_i =
\begin{cases}
1 & \text{if } \lVert X(t) - W_i(t) \rVert = \min_{j=1,\dots,m} \lVert X(t) - W_j(t) \rVert \\
0 & \text{otherwise}
\end{cases}
\qquad (8.6)
$$
where $y_i$ indicates the winner, $X(t)$ is the current input, and $W_j(t)$ are the weights of cluster $j$, instead of updating the weights as usual, we record the usage of each cluster, i.e.,
$$
p_j^{\text{new}} = p_j^{\text{old}} + E \left( y_j - p_j^{\text{old}} \right), \qquad j = 1, \dots, m, \qquad (8.7)
$$
where $p_j$ is the record of usage and $E$ is a step size.
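As a quick check on what Eq. (8.7) computes: the update is an exponential moving average of the win indicator, so each usage record converges to the fraction of inputs that the corresponding cluster wins. A small simulation (the win probabilities and step size below are made up for illustration):

```python
import numpy as np

m, E = 4, 0.01                  # number of clusters, step size (assumed)
p = np.zeros(m)                 # usage records p_j
rng = np.random.default_rng(0)
probs = [0.7, 0.1, 0.1, 0.1]    # hypothetical win frequencies

for _ in range(10_000):
    y = np.zeros(m)             # winner indicator, stand-in for Eq. (8.6)
    y[rng.choice(m, p=probs)] = 1.0
    p += E * (y - p)            # Eq. (8.7)

print(p.round(2))               # close to [0.7, 0.1, 0.1, 0.1], up to noise
```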
After the statistics are known, selection of the winner is done as
$$
y_i =
\begin{cases}
1 & \text{if } \lVert X(t) - W_i(t) \rVert - b_i = \min_{j=1,\dots,m} \left( \lVert X(t) - W_j(t) \rVert - b_j \right) \\
0 & \text{otherwise}
\end{cases}
\qquad (8.8)
$$
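The bias terms $b_i$, $b_j$ are not defined in this excerpt; in DeSieno's original conscience mechanism [18] they take the form b_j = C(1/m - p_j) for a bias factor C, so a unit that wins more often than the fair share 1/m is handicapped, while a rarely winning unit gets a head start. A sketch of Eq. (8.8) under that assumption:

```python
import numpy as np

def conscience_winner(W, x, p, C=10.0):
    """Biased winner selection in the spirit of Eq. (8.8).

    W : (m, n) weight matrix, x : (n,) input,
    p : (m,) usage records maintained by Eq. (8.7),
    C : bias factor (DeSieno-style assumption; not given in the text).
    """
    m = W.shape[0]
    b = C * (1.0 / m - p)                   # conscience bias (assumed form)
    dists = np.linalg.norm(W - x, axis=1)   # ||X(t) - W_j(t)||
    return int(np.argmin(dists - b))        # index i with y_i = 1
```

Driving every unit toward the same win frequency 1/m is exactly the equal-utilization goal described above: dense regions of the input distribution end up served by more cluster units, and sparse regions by fewer.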