Figure 3.22 Evolution of the squared weight moduli of the PCA linear neurons with initial conditions equal to the solution; horizontal axis: iterations (×10^4); curves: EXIN (red), LUO (blue), OJAn (black), OJA (green). (See insert for color representation of the figure.)
The following simulation deals with the same example as in Section 2.6.2.4, but now the initial condition is given by the first principal component:

w(0) = [0.5366, 0.6266, 0.5652]^T

with λ_max = 0.7521. The learning rate is the same. Figures 3.22 and 3.23 confirm the previous analysis regarding the PCA divergence. Figure 3.24 shows an estimate of the largest eigenvalue obtained with the PCA neurons, each weight component initialized randomly in [−0.1, 0.1]. PCA EXIN is the fastest, but shows sizable oscillations in the transient.
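As a rough illustration of this kind of experiment, the following minimal Python sketch trains a single linear PCA neuron with Oja's rule (the OJA curve in the figures), with each weight component initialized uniformly in [−0.1, 0.1], and reads off the largest-eigenvalue estimate as the Rayleigh quotient of the learned weight vector. The synthetic data, learning rate, and iteration count are placeholder assumptions, not the setup of Section 2.6.2.4.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder zero-mean data with a known diagonal covariance (not the book's data).
    C_true = np.diag([0.75, 0.20, 0.05])
    X = rng.multivariate_normal(np.zeros(3), C_true, size=20000)

    # Single linear PCA neuron trained with Oja's rule:
    #   y = w^T x;   w <- w + alpha * y * (x - y * w)
    w = rng.uniform(-0.1, 0.1, size=3)   # random initial conditions in [-0.1, 0.1]
    alpha = 0.01                         # constant learning rate (placeholder value)
    for x in X:
        y = w @ x
        w += alpha * y * (x - y * w)

    # Near convergence, w approximates the first principal component with unit norm,
    # so the Rayleigh quotient w^T C w / w^T w estimates the largest eigenvalue.
    C_hat = X.T @ X / len(X)
    print("estimated lambda_max:", w @ C_hat @ w / (w @ w))   # close to 0.75 here

Oja's rule is used here only as a stand-in for the family of PCA neurons compared in the figures; the EXIN, LUO, and OJAn learning laws differ in their weight-increment terms.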
In [140], a neural network is proposed to find a basis for the principal subspace. For the adaptive extraction of the principal components, the stochastic
gradient ascent (SGA) algorithm [139,142], the generalized Hebbian algorithm
(GHA) [166], the LEAP algorithm [16,17], PSA LUO [the learning law is eq.
(3.21), but with the sign of the weight increment reversed] [125], and the APEX
algorithm [53,111,112] are among the most important methods. In particular,
for PSA LUO, convergence is guaranteed if the moduli of the weight vectors
are greater than 1. A PSA EXIN learning law can be conceived by reversing the sign of the weight increment in the PCA EXIN learning law.
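Of the methods just listed, the generalized Hebbian algorithm (GHA) [166] has a particularly compact form, so a minimal sketch of it follows; the synthetic data, learning rate, and number of extracted components are placeholder assumptions. GHA updates an m × n weight matrix W by y = Wx and W ← W + α(y x^T − LT[y y^T] W), where LT[·] keeps the lower triangle (diagonal included), deflating each neuron against the ones above it so that ordered principal components, not just a subspace basis, are extracted.

    import numpy as np

    def gha_step(W, x, alpha):
        # One GHA (Sanger's rule) update:
        #   y = W x;   W <- W + alpha * (y x^T - LT[y y^T] W)
        y = W @ x
        return W + alpha * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

    rng = np.random.default_rng(1)
    C_true = np.diag([0.75, 0.20, 0.05])      # placeholder covariance
    X = rng.multivariate_normal(np.zeros(3), C_true, size=30000)

    W = rng.uniform(-0.1, 0.1, size=(2, 3))   # extract the first two components
    for x in X:
        W = gha_step(W, x, alpha=0.005)

    print(np.round(W @ W.T, 2))   # ~ identity: rows become orthonormal eigenvectors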