with $u_i \stackrel{\text{def}}{=} [u_{i,1}, \ldots, u_{i,n_i}]^T$ and $f_i(u_1, \ldots, u_i, x) \stackrel{\text{def}}{=} \nabla_{u_i}(w_i^T x x^T w_i) = 2\,[\nabla_{u_i}(w_i)]\, x\, x^T w_i$, $i = 1, \ldots, r$. This rather intuitive computational process was confirmed by simulation results [61]. Later, a formal analysis of the convergence and performance was carried out in [23], where it was proved that the stationary points of the associated ODE are globally asymptotically stable (see Subsection 4.7.1) and that the stochastic algorithm (4.49) converges almost surely to these points for stationary data $x(k)$ when $\mu_k$ is decreasing with $\lim_{k\to\infty}\mu_k = 0$ and $\sum_k \mu_k = \infty$. We note that this algorithm yields exactly orthonormal estimates of the $r$ dominant or minor eigenvectors through a simple change of sign in its step size, and requires $O(nr)$ operations per iteration, not counting the evaluation of the trigonometric functions.
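For instance, a harmonically decreasing step size (an illustrative choice on our part, not one prescribed in [23]) satisfies both conditions:

$$\mu_k = \frac{\mu_0}{k}, \quad \mu_0 > 0 \;\Longrightarrow\; \lim_{k\to\infty}\mu_k = 0 \quad\text{and}\quad \sum_{k\ge 1}\mu_k = \mu_0 \sum_{k\ge 1}\frac{1}{k} = \infty.$$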
Alternatively, a stochastic gradient-like algorithm denoted direct adaptive subspace estimation (DASE) has been proposed in [62], with a direct parametrization of the eigenvectors by means of their coefficients. Maximization or minimization of (4.3) is performed with the help of a modification of the classical stochastic gradient algorithm that ensures an approximate unit norm of the first estimated eigenvector $w_1(k)$ [in fact a rewriting of Oja's neuron (4.23)]. Then, a modification of the classical stochastic gradient algorithm using a deflation procedure, inspired by the constraint $W^T W = I_r$, gives the estimates $[w_i(k)]_{i=2,\ldots,r}$:
$$w_1(k+1) = w_1(k) \pm \mu_k \left\{ x(k)x^T(k) - \left[ w_1^T(k)\, x(k)\, x^T(k)\, w_1(k) \right] I_n \right\} w_1(k)$$

$$w_i(k+1) = w_i(k) \pm \mu_k \left\{ \left( x(k)x^T(k) - \left[ w_i^T(k)\, x(k)\, x^T(k)\, w_i(k) \right] I_n \right) \left( I_n - \sum_{j=1}^{i-1} w_j(k) w_j^T(k) \right) \right\} w_i(k), \quad \text{for } i = 2, \ldots, r \qquad (4.50)$$
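To make the structure of (4.50) concrete, the following NumPy sketch performs one iteration of the update for the maximization case (sign $+$). The function name, interface, and step-size value are illustrative assumptions, not part of the original description of the algorithm.

```python
import numpy as np

def dase_update(W, x, mu):
    """One DASE-type iteration in the spirit of (4.50), maximization sign.

    W  : (n, r) matrix whose columns are the current estimates w_i(k)
    x  : (n,) data snapshot x(k)
    mu : step size mu_k
    (Function name and interface are illustrative, not from the source.)
    """
    n, r = W.shape
    C_inst = np.outer(x, x)                        # instantaneous estimate x(k) x^T(k)
    W_new = W.copy()
    for i in range(r):
        w = W[:, i]
        scal = w @ C_inst @ w                      # w_i^T(k) x(k) x^T(k) w_i(k)
        A = C_inst - scal * np.eye(n)              # x x^T - [w^T x x^T w] I_n
        if i == 0:
            W_new[:, 0] = w + mu * A @ w           # first eigenvector: Oja's-neuron-like update
        else:
            P = np.eye(n) - W[:, :i] @ W[:, :i].T  # deflation: I_n - sum_{j<i} w_j(k) w_j^T(k)
            W_new[:, i] = w + mu * A @ P @ w
    return W_new
```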
This totally empirical procedure has been studied in [63]. It has been proved that the stationary points of the associated ODE are all the eigenvector bases $\{\pm u_{i_1}, \ldots, \pm u_{i_r}\}$. Using the eigenvalues of the derivative of the mean field (see Subsection 4.7.1), it is shown that all these eigenvector bases are unstable, except $\{\pm u_1\}$ for $r = 1$ associated with the sign $+$ [where algorithm (4.50) reduces to Oja's neuron (4.23)]. But a closer examination of these eigenvalues, which are all real-valued, shows that only for the eigenbases $\{\pm u_1, \ldots, \pm u_r\}$ and $\{\pm u_n, \ldots, \pm u_{n-r+1}\}$, associated with the signs $+$ and $-$ respectively, are all the eigenvalues of the derivative of the mean field strictly negative, except for the eigenvalues associated with variations of the eigenvectors $\{\pm u_1, \ldots, \pm u_r\}$ and $\{\pm u_n, \ldots, \pm u_{n-r+1}\}$ in their own directions. Consequently, it is claimed in [63] that if the norm of each estimated eigenvector is set to one at each iteration, the stability of the algorithm is ensured. The simulations presented in [62] confirm this intuition.
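A minimal way to apply the normalization suggested by [63] is to rescale each column to unit norm after every update; the snippet below is only a sketch, continuing the hypothetical dase_update function above.

```python
# Renormalize each estimated eigenvector to unit norm after every iteration,
# as advocated in [63] to stabilize the algorithm (illustrative sketch).
W = dase_update(W, x, mu)
W /= np.linalg.norm(W, axis=0, keepdims=True)
```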
4.6.2 Eigenvector Power-based Methods
Note that similarly to the subspace criterion (4.7), the maximization or minimization of the weighted subspace criterion (4.8) $J(W) \stackrel{\text{def}}{=} \mathrm{Tr}[V W^T C(k) W]$ subject to the