Digital Signal Processing Reference
11.3 Kohonen's Self-Organizing Map
Let d(x, w_i) denote a generic distance measure between x and w_i. The index of the best matching neuron to input pattern x is given by

$$c(\mathbf{x}) = \arg\min_{i}\,\{\,d(\mathbf{x},\mathbf{w}_i)\,\}, \qquad i = 1, 2, \ldots, N. \tag{11.14}$$
The neuron that satisfies Equation 11.14 is the so-called winner for the input pattern x. By using Equation 11.14, the continuous space of input patterns is mapped onto a set of discrete neurons. Depending on the application, the response of the network can be either the index of the winner neuron or the synaptic weight vector of the winner neuron. Figure 11.3a depicts the topology of the NN under study.
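The winner-selection rule of Equation 11.14 is straightforward to implement. The following sketch assumes the Euclidean norm as the generic distance d (the text leaves d unspecified, so any other distance measure could be substituted):

```python
import numpy as np

def winner(x, W):
    """Return the index c(x) of the best-matching neuron (Eq. 11.14).

    W is an (N, dim) array whose rows are the weight vectors w_i.
    The Euclidean distance is assumed here for d(x, w_i).
    """
    d = np.linalg.norm(W - x, axis=1)   # d(x, w_i), i = 1, ..., N
    return int(np.argmin(d))            # index of the winner neuron
```

Depending on the application, the response would then be either the returned index `c` or the weight vector `W[c]`.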
Our aim is to determine w_i so that an ordered mapping is obtained, one that describes the distribution of patterns x. One possible way is to determine w_i according to the principles of vector quantization.12 Let f(x) be the joint probability density function (pdf) of input patterns and w_c the closest weight vector to x with respect to Equation 11.14. The elements of w_c are chosen so that the expected average quantization error

$$\varepsilon = \int_{X} g(d(\mathbf{x},\mathbf{w}_c))\, f(\mathbf{x})\, d\mathbf{x} \tag{11.15}$$
is minimized,6 where g(·) is a monotonically increasing function of the distance d. One can observe that the index c is a function of x as well as of all w_i. Accordingly, the integrand in Equation 11.15 is not continuously differentiable. The minimization of Equation 11.15 is, in general, difficult and does not guarantee that all neurons are indexed in one particular way, i.e., that an ordered mapping results. However, if the quantization error g(d(x, w_c)) is modified so that it is smoothed locally by a kernel h_ci, which is a function of the distance between the neurons c and i, the minimization of the new criterion ε,

$$\varepsilon = \int_{X} \sum_{i} h_{ci}\, g(d(\mathbf{x},\mathbf{w}_i))\, f(\mathbf{x})\, d\mathbf{x}, \tag{11.16}$$
results in ordered weight vectors. The minimization of Equation 11.16 can be carried out by stochastic approximation methods, such as the Robbins-Monro algorithm.13 Let x = x(n) be the input pattern at discrete time instant n, and let w_i(n) be the approximation of w_i at the same time instant. Let us consider the sample function ε(n) defined by7

$$\varepsilon(n) = \sum_{i} h_{ci}\, g(d(\mathbf{x}(n),\mathbf{w}_i(n))). \tag{11.17}$$
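For concreteness, the sample function of Equation 11.17 can be evaluated as follows. This sketch assumes choices the text leaves open: the squared Euclidean distance for g(d(·,·)) and a Gaussian kernel of the lattice distance |c − i| for h_ci on a one-dimensional chain of neurons:

```python
import numpy as np

def sample_error(x, W, c, sigma=1.0):
    """Evaluate eps(n) of Eq. 11.17 for one input x and winner index c.

    Assumptions (not fixed by the text): g(d(x, w_i)) is the squared
    Euclidean distance, and h_ci is a Gaussian of the lattice
    distance |c - i| with width sigma on a 1-D neuron chain.
    """
    i = np.arange(len(W))
    h = np.exp(-((i - c) ** 2) / (2.0 * sigma ** 2))  # kernel h_ci
    g = np.sum((W - x) ** 2, axis=1)                  # g(d(x, w_i))
    return float(np.sum(h * g))
```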
The Robbins-Monro algorithm for the minimization of Equation 11.17 yields

$$\mathbf{w}_i(n+1) = \mathbf{w}_i(n) - \frac{1}{2}\,\beta(n)\,\frac{\partial \varepsilon(n)}{\partial \mathbf{w}_i(n)}, \tag{11.18}$$
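Under the same assumptions as above (g the identity and d the squared Euclidean distance, so that ∂ε(n)/∂w_i = −2 h_ci (x − w_i)), Equation 11.18 reduces to the familiar SOM rule w_i(n+1) = w_i(n) + β(n) h_ci (x − w_i(n)). A minimal sketch of one such update step, with an assumed hyperbolically decreasing gain β(n):

```python
import numpy as np

def som_step(x, W, n, beta0=0.5, sigma=1.0):
    """One Robbins-Monro update of Eq. 11.18 for all weight vectors.

    With g the identity and d the squared Euclidean distance
    (assumptions), the gradient of eps(n) w.r.t. w_i is
    -2 h_ci (x - w_i), so Eq. 11.18 becomes
    w_i(n+1) = w_i(n) + beta(n) h_ci (x - w_i(n)).
    """
    c = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winner, Eq. 11.14
    i = np.arange(len(W))
    h = np.exp(-((i - c) ** 2) / (2.0 * sigma ** 2))   # lattice kernel h_ci
    beta = beta0 / (1.0 + n)                           # decreasing gain beta(n)
    return W + beta * h[:, None] * (x - W)
```

Each step pulls the winner and, to a lesser degree, its lattice neighbors toward the input, which is what produces the ordered mapping.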