[Plot: P12 (red) versus iterations, 0 to 1000]
Figure 3.17 Second example for MSA EXIN: P12.
MSA EXIN and MSA LUO share a constant learning rate α(t) = const = 0.001. The initial conditions are w1(0) = e1 and w2(0) = e2. After 1000 iterations, MSA EXIN yields the following estimates:
λ1 = φ1(1000) = 1.0028    w1(1000) = [0.9070, 0.0616, 0.0305, 0.4433]^T
λ2 = φ2(1000) = 1.0032    w2(1000) = [0.0592, 0.6889, 0.7393, 0.0797]^T
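To make the experimental setup concrete, the following is a minimal sketch of this kind of sequential minor-component extraction. It assumes the MCA EXIN update w ← w − (α/‖w‖²)[yx − (y²/‖w‖²)w] with y = wᵀx for each neuron, a simple deflation of the input for the second neuron, and a Rayleigh-quotient reading of φi; the exact MSA EXIN lateral-orthogonalization term and the original data stream are not given in this extract, so those parts are assumptions, not the book's definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-D input stream: the original autocorrelation matrix is
# not reproduced here, so a random symmetric positive-definite R is used.
n, T, alpha = 4, 1000, 0.001          # alpha(t) = const = 0.001, as in the text
A = rng.standard_normal((n, n))
R = (A @ A.T) / n                     # autocorrelation matrix E[x x^T]
L = np.linalg.cholesky(R)             # to sample x(t) with covariance R

W = np.eye(n)[:2].copy()              # rows: w1(0) = e1, w2(0) = e2

def exin_step(w, x):
    """One MCA EXIN update: stochastic gradient of the Rayleigh quotient."""
    y = w @ x
    n2 = w @ w
    return w - (alpha / n2) * (y * x - (y ** 2 / n2) * w)

for t in range(T):
    x = L @ rng.standard_normal(n)
    W[0] = exin_step(W[0], x)
    # Deflation (illustrative stand-in for the MSA orthogonalization term):
    # the second neuron sees x with the first direction projected out.
    x2 = x - ((W[0] @ x) / (W[0] @ W[0])) * W[0]
    W[1] = exin_step(W[1], x2)

# Eigenvalue estimates phi_i(1000) read off as Rayleigh quotients
for i, w in enumerate(W, start=1):
    print(f"phi_{i}(1000) = {w @ R @ w / (w @ w):.4f}")
```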
MSA LUO yields, after 1000 iterations, the following eigenvalues:
λ1 = φ1(1000) = 1.0011  and  λ2 = φ2(1000) = 1.0088
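For comparison, here is a sketch of the corresponding LUO update, written as a drop-in replacement for exin_step in the loop above. The form shown is the Luo-Unbehauen MCA law as commonly stated; whether it matches the exact MSA LUO variant compared in the text is an assumption.

```python
def luo_step(w, x):
    """One MCA LUO update (assumed Luo-Unbehauen form): same stationary
    directions as EXIN, but the step is scaled by ||w||^2 rather than
    1/||w||^2, so the weight norm is not kept in check."""
    y = w @ x
    return w - alpha * ((w @ w) * y * x - (y ** 2) * w)
```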
Figures 3.17 to 3.21 show some results for both neural networks. Their performance with respect to the nondesired eigenvectors is similar. More interesting is the behavior in the minor subspace: as is evident in Figure 3.16 for MSA EXIN and Figure 3.19 for MSA LUO, MSA EXIN estimates the minor components more accurately and holds the estimate for a longer time, so it can easily be stopped.
3.3.3 Principal Components and Subspace Analysis
This subject is very important but lies outside the scope of this book, except as a straightforward extension of the MCA and MSA EXIN learning laws.