SOINN [38] (Self-Organizing Incremental Neural Network). Other alternatives are methods based on heuristics, such as k-means [39] or PAM [16]. These two methods define a set of initial clusters whose representatives correspond to new or existing individuals, while the remaining individuals are allocated to the nearest cluster. The problem with these methods is that they do not take into account changes in the density distribution of the individuals, and they usually fail to detect clusters with atypical shapes, such as elongated clusters.
4.2.1 ESOINN Neural Network
Neural networks based on GNG allow the detection of clusters with atypical shapes, adjusting iteratively to the distribution of the individuals and detecting low-density zones. There are variants of GNG, such as GCS [40] (Growing Cell Structure) or SOINN [38] (Self-Organizing Incremental Neural Network). Unlike mesh-based self-organizing maps, Growing Grid or GCS do not fix the number of neurons or the degree of connectivity, but they do establish the dimensionality of each mesh. This complicates the separation phase between groups once the neurons are distributed evenly across the surface. The ESOINN neural network [15] (Enhanced Self-Organizing Incremental Neural Network) is a variation of the SOINN neural network [38] that works with a single layer, while still incorporating both the distribution process along the surface and the separation between low-density groups. The learning process of the network is divided into two stages: a first competition stage, CHL [40] (Competitive Hebbian Learning), in which the node closest to the input pattern is selected; and a second adaptation/growing stage similar to that of a GNG. The training phase and the algorithms applied at each modified stage are outlined below:
1. Update the weights of the neurons following a process similar to that of SOINN, but introducing a new definition of the learning rate in order to give the model greater stability. This learning rate has produced good results in other networks such as SOM [42] (see the first sketch after this list).
$$\Delta W_{a_1} = n_1(M_{a_1})\,(\xi - W_{a_1}) \qquad (6)$$
$$\Delta W_{a_i} = n_2(M_{a_1})\,(\xi - W_{a_i}), \quad \text{with } a_i \in N$$
where
$$n_1(x) = \frac{1}{x}, \qquad n_2(x) = \frac{2}{x^2 + x}$$
$a_i$ is neuron $i$, $\xi$ is the input pattern, $M_{a_1}$ is the number of winnings of neuron $a_1$, and $N$ is the set of neighbours of $a_1$.
2. Delete the connections with the highest ages. The ages are standardized, and those whose values fall in the rejection region with k > 0 are removed. The assigned value of $\alpha$ is 0.05, therefore
$$z_i = \frac{z_i - \mu}{\sigma}, \qquad z \sim N(0,1), \qquad f(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}} \qquad (7)$$
then
$$P(z > k) = \alpha/2 \;\Rightarrow\; P(z < k) = 0.975, \qquad \Theta(k) = 0.975$$
where $\Theta$ is the standard normal distribution function, so k = 1.96. Therefore, all connections whose standardized age z is greater than 1.96 are deleted (see the second sketch after this list).
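The following minimal sketch is not taken from the chapter: all names and the toy data are illustrative, and it assumes the learning-rate functions $n_1$ and $n_2$ as reconstructed in Eq. (6). It shows the competition step, in which the node closest to the input pattern wins, followed by the Eq. (6) update that moves the winner and its topological neighbours towards the pattern.

```python
import numpy as np

def n1(m):
    """Winner learning rate: decreases with the winning count M (as in Eq. 6)."""
    return 1.0 / m

def n2(m):
    """Neighbour learning rate: decreases faster than n1 (as in Eq. 6)."""
    return 2.0 / (m ** 2 + m)

def find_winner(weights, pattern):
    """Competition step (CHL): index of the node nearest to the input pattern."""
    return int(np.linalg.norm(weights - pattern, axis=1).argmin())

def update_weights(weights, neighbours, wins, pattern):
    """Eq. (6): move the winner and its topological neighbours towards the input pattern."""
    winner = find_winner(weights, pattern)
    wins[winner] += 1
    m = wins[winner]
    weights[winner] += n1(m) * (pattern - weights[winner])
    for i in neighbours[winner]:
        weights[i] += n2(m) * (pattern - weights[i])
    return winner

# Illustrative usage: three neurons in 2-D with a simple adjacency structure
weights = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
neighbours = {0: [1, 2], 1: [0], 2: [0]}   # topological neighbours of each node
wins = np.array([3, 1, 1])                  # number of times each node has won so far
update_weights(weights, neighbours, wins, pattern=np.array([0.4, 0.3]))
print(weights)
```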
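A second minimal sketch, again illustrative rather than the chapter's own code, applies the pruning rule of Eq. (7): connection ages are standardized and any connection whose standardized age exceeds k = 1.96 (with α = 0.05) is removed.

```python
import numpy as np

def prune_old_edges(ages, k=1.96):
    """Keep connections whose standardized age stays below the rejection threshold (Eq. 7)."""
    mu, sigma = ages.mean(), ages.std()
    if sigma == 0:                      # all ages equal: no connection is atypically old
        return np.ones(len(ages), dtype=bool)
    z = (ages - mu) / sigma             # standardization, z ~ N(0, 1) under the model
    return z <= k                       # keep edges with z <= 1.96 (alpha = 0.05)

# Illustrative usage: the clearly older connection is dropped
edge_ages = np.array([1.0, 2.0, 1.5, 2.5, 30.0, 3.2, 0.5, 1.1, 2.2, 1.8])
keep = prune_old_edges(edge_ages)
print(edge_ages[keep])
```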
 