Digital Signal Processing Reference
In-Depth Information
The bubble is centered at the point where the initial response $y_j(0)$ is maximized with respect to $I_j$. The width of the bubble depends on the ratio of excitatory to inhibitory lateral connections. If the positive feedback is strengthened, the bubble becomes wider. If the negative feedback is amplified, the bubble becomes thinner.
If we consider the piecewise-linear function
$$
\theta(x) =
\begin{cases}
a & \text{if } x \ge a \\
x & \text{if } 0 \le x < a \\
0 & \text{otherwise,}
\end{cases}
\qquad (11.11)
$$
then the $j$th neuron output can be written as

$$
y_j =
\begin{cases}
a & \text{if neuron } j \text{ is inside the bubble} \\
0 & \text{otherwise.}
\end{cases}
\qquad (11.12)
$$
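As a concrete illustration, the piecewise-linear limiter of Equation 11.11 can be sketched in a few lines of Python; the saturation level `a` is the free parameter from the equation:

```python
import numpy as np

def theta(x, a=1.0):
    """Piecewise-linear function of Equation 11.11:
    saturates at a for x >= a, is linear on [0, a), and is 0 otherwise."""
    return np.clip(x, 0.0, a)

# Applied to a vector of initial responses, neurons driven past the
# saturation level output a, weakly driven neurons pass through linearly,
# and negatively driven neurons are silenced (cf. Equation 11.12).
responses = np.array([-0.3, 0.2, 1.5, 0.8, -1.0])
print(theta(responses, a=1.0))  # values clipped into [0, 1]
```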
Accordingly, we can exploit the bubble formation as a computational shortcut that emulates the effect of lateral feedback connections. Therefore, it is sufficient to introduce the topological neighborhood of active neurons that corresponds to the activity bubble. Henceforth, in Kohonen's model we are interested in:
1. Finding the winner neuron
2. Defining the topological neighborhood of the winner
3. Updating the weights of the neurons that fall inside this neighborhood
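The three steps above can be sketched as a single training iteration on a one-dimensional map; the learning rate `eta` and the rectangular neighborhood radius `radius` are illustrative assumptions, not values prescribed by the text:

```python
import numpy as np

def som_step(W, x, eta=0.1, radius=1):
    """One Kohonen update on a 1-D map.
    W: (N, p) weight matrix, one row per neuron; x: (p,) input pattern."""
    # 1. Find the winner neuron (minimum Euclidean distance to x).
    winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    # 2. Define the topological neighborhood of the winner.
    lo, hi = max(0, winner - radius), min(len(W), winner + radius + 1)
    # 3. Update only the weights that fall inside this neighborhood,
    #    moving them toward the input pattern.
    W[lo:hi] += eta * (x - W[lo:hi])
    return winner

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))
x = np.array([1.0, 0.0, 0.0])
winner = som_step(W, x)
```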
The winner neuron is the one that corresponds to the maximum stimulus $I_j = \mathbf{w}_j^T \mathbf{x}$, where $\mathbf{w}_j = (w_{j1}, w_{j2}, \ldots, w_{jp})^T$, $j = 1, 2, \ldots, N$, is the weight vector of the $j$th neuron and $\mathbf{x} = (x_1, x_2, \ldots, x_p)^T$ is the input pattern. If we assume that all weight vectors are normalized to a constant norm, then the neuron that maximizes the inner product is simply the neuron that minimizes the Euclidean distance between the weight vectors and the input pattern, $d(\mathbf{w}_j, \mathbf{x})$, because

$$
d^2(\mathbf{w}_j, \mathbf{x}) = \|\mathbf{w}_j - \mathbf{x}\|^2 = \|\mathbf{w}_j\|^2 - 2\,\mathbf{w}_j^T \mathbf{x} + \|\mathbf{x}\|^2.
\qquad (11.13)
$$
Equation 11.13 indicates that, under the assumptions of a piecewise-linear function (Equation 11.11) and weight vectors normalized to a constant norm, the self-organizing model of Kohonen performs like a $K$-means clustering algorithm. In the following, we will resort to this simplification.
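The equivalence stated by Equation 11.13 can be checked numerically: once all weight vectors are scaled to a constant (here, unit) norm, the neuron with the largest inner product $\mathbf{w}_j^T \mathbf{x}$ is exactly the neuron with the smallest Euclidean distance to the input. The random data below are only for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(8, 4))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # constant (unit) norm rows
x = rng.normal(size=4)

by_inner = np.argmax(W @ x)                          # maximum stimulus I_j
by_dist = np.argmin(np.linalg.norm(W - x, axis=1))   # minimum d(w_j, x)
print(by_inner == by_dist)  # True
```

With $\|\mathbf{w}_j\|$ constant, $d^2$ differs from $-2\,\mathbf{w}_j^T \mathbf{x}$ only by terms that do not depend on $j$, so the two selection rules always agree.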