Digital Signal Processing Reference
THEOREM 1 (Reference 25)
Let $\mu_i$ be the eigenvalues of the matrix $(B + B^T)$, $Q$ be the modal matrix that has as columns the eigenvectors that correspond to the eigenvalues $\mu_i$, and $M = \operatorname{diag}[\mu_1, \mu_2, \ldots, \mu_{N_p}]$. We define also the following terms:

$$e_{ii} = \left[ Q^T C(0)\, Q \right]_{ii}, \qquad q_{ii} = \left[ Q^T D\, Q \right]_{ii} \tag{11.38}$$

where $[\,\cdot\,]_{ii}$ denotes the $ii$-element of the matrix inside the brackets. The following statements hold:
(i)
$$Y(t) \le \exp\left\{ -\tfrac{1}{2} \min_i \mu_i \int_0^t \alpha(\zeta)\, d\zeta \right\}. \tag{11.39}$$
(ii) If the adaptation step is constant, $\alpha(t) = \alpha$, then
$$J(t) = \sum_i \left[ \alpha \frac{q_{ii}}{\mu_i} + \left( e_{ii} - \alpha \frac{q_{ii}}{\mu_i} \right) \exp\{ -\mu_i \alpha t \} \right]. \tag{11.40}$$
(iii) If $(B + B^T)$ is positive definite and $\int_0^\infty \alpha(\zeta)\, d\zeta = \infty$, then
$$\lim_{t \to \infty} J(t) = \sum_i \frac{q_{ii}}{\mu_i} \lim_{t \to \infty} \alpha(t). \tag{11.41}$$
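The behavior described by Equations 11.38 through 11.41 can be checked numerically. The sketch below is illustrative only: the matrices $B$, $C(0)$, and $D$ are invented stand-ins playing the roles they have in the theorem, with $B + B^T$ made positive definite by construction. It forms $e_{ii}$ and $q_{ii}$ as in Equation 11.38 and verifies that, for a constant step $\alpha$, $J(t)$ decays from its initial value toward the limit given by Equation 11.41.

```python
import numpy as np

# Hypothetical small example (B, C0, D are assumptions, not from the text).
rng = np.random.default_rng(0)
n = 4
A_ = rng.standard_normal((n, n))
B = A_ @ A_.T / n + np.eye(n)             # ensures B + B^T is positive definite
C0 = np.diag(rng.uniform(0.5, 1.5, n))    # stand-in for the initial covariance C(0)
D = np.diag(rng.uniform(0.1, 0.5, n))     # stand-in for the driving-term matrix D

# Eigen-decomposition of B + B^T: eigenvalues mu_i, modal matrix Q
mu, Q = np.linalg.eigh(B + B.T)

# Terms of Equation 11.38
e = np.diag(Q.T @ C0 @ Q)   # e_ii
q = np.diag(Q.T @ D @ Q)    # q_ii

alpha = 0.05                # constant adaptation step

def J(t):
    """Equation 11.40 for a constant adaptation step alpha."""
    return np.sum(alpha * q / mu + (e - alpha * q / mu) * np.exp(-mu * alpha * t))

# Limit predicted by Equation 11.41 when alpha(t) is the constant alpha
J_inf = np.sum(q / mu) * alpha

# J(0) equals the sum of the e_ii; J(t) then decays toward J_inf
print(J(0.0), J(1e4), J_inf)
```

With this construction every coefficient $e_{ii} - \alpha q_{ii}/\mu_i$ is positive, so $J(t)$ decreases monotonically to the limit, matching statement (iii).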
One can easily verify that, for a constant adaptation step, Equation 11.40 is consistent with Equation 11.36. For $\alpha(t) = 1/t$, we have $\lim_{t \to \infty} J(t) = 0$.
Kosmatopoulos and Christodoulou (Reference 27) derived another proof of convergence. They applied a time-coordinate transformation similar to that analyzed in Reference 26. For a neighborhood function chosen to be the Kronecker delta, they transformed Equation 11.20 into a linear time-varying stochastic difference equation and applied Lyapunov stochastic stability arguments.

11.5
Self-Organizing Map Properties
When the training algorithm has led to convergence, the feature map computed by the algorithm depicts important statistical characteristics of the space of input patterns. We have already said that the map computed by the neural network is essentially a nonlinear transformation that maps the input space $X$ into the output space $A$ ($X \to A$). From this point of view, we have the following.
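As a concrete illustration of this transformation, the following sketch maps an input vector to the lattice coordinates of its best-matching neuron. The grid size and the weight values are invented for illustration; in practice the weights would come from a trained network.

```python
import numpy as np

# Hypothetical trained SOM: a 5x5 lattice of neurons with 3-dimensional
# weight vectors (random stand-ins for trained weights).
rng = np.random.default_rng(1)
grid_h, grid_w, dim = 5, 5, 3
weights = rng.random((grid_h * grid_w, dim))

def som_map(x):
    """The nonlinear map X -> A: return the (row, col) lattice position
    of the neuron whose weight vector is closest to the input x."""
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    return divmod(winner, grid_w)

x = rng.random(dim)        # an input pattern from X
row, col = som_map(x)      # its image in the output lattice A
print(row, col)
```

Nearby points of $X$ are sent to nearby lattice positions once the map has self-organized, which is the topology-preserving property discussed next.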