Digital Signal Processing Reference
In-Depth Information
These asymptotic biases read

$$
\mathrm{E}[w(k)] - u_1 = -\frac{1}{k}\sum_{i=2}^{n}\frac{\lambda_1\lambda_i}{2(\lambda_1-\lambda_i)^2}\,u_1 + o\!\left(\frac{1}{k}\right)
\qquad\text{and}\qquad
\mathrm{E}[\lambda(k)] - \lambda_1 = o\!\left(\frac{1}{k}\right).
$$

We note that these asymptotic biases are similar to those obtained in batch estimation, derived from a Taylor series expansion [77, p. 68] using expression (4.77) of $C_u$.
Finally, we see that in both adaptive and batch estimation, the squares of these biases are an order of magnitude smaller than the variances, which are of order $O(\mu)$ and $O(1/k)$, respectively.
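As a quick numerical illustration of the bias expression above, the short sketch below evaluates the first-order bias coefficient $\sum_{i\ge 2}\lambda_1\lambda_i/(2(\lambda_1-\lambda_i)^2)$ for the eigenvalue spectrum used later in the Figure 4.1 example; the function name is ours, introduced only for illustration.

```python
def bias_coefficient(eigvals):
    """First-order bias coefficient c in E[w(k)] - u_1 ~ -(c/k) u_1,
    i.e. c = sum_{i>=2} lam_1 * lam_i / (2 * (lam_1 - lam_i)**2).

    eigvals: eigenvalues of C_x sorted in decreasing order.
    """
    l1 = eigvals[0]
    return sum(l1 * li / (2.0 * (l1 - li) ** 2) for li in eigvals[1:])

# Spectrum from the Figure 4.1 example: C_x = Diag(1.75, 1.5, 0.5, 0.25).
c = bias_coefficient([1.75, 1.5, 0.5, 0.25])
print(c)           # bias coefficient for this spectrum
print(c / 1000.0)  # O(1/k) bias magnitude after k = 1000 samples
```

The dominant contribution comes from the closest eigenvalue $\lambda_2 = 1.5$: the $(\lambda_1-\lambda_i)^{-2}$ factor makes the bias blow up as eigenvalues cluster, which is consistent with the bias being driven by the eigenvalue gaps.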
This methodology has been applied to compare the theoretical asymptotic performance of several adaptive algorithms for minor and principal component analysis in [22, 26, 27]. For example, the asymptotic mean square error $\mathrm{E}(\|W(k)-W\|^2_{\mathrm{Fro}})$ of the estimate $W(k)$ given by the WSA algorithm (4.57) is shown in Figure 4.1, where the stepsize $\mu$ is chosen to provide the same value of $\mu\,\mathrm{Tr}(C_u)$. We clearly see in this figure that the value $\beta_2/\beta_1 = 0.6$ optimizes the tradeoff between asymptotic mean square error and speed of convergence.
[Figure 4.1 here: learning curves plotted on a logarithmic scale, mean square error from 10^-2 to 10^1 versus iteration number from 0 to 4000, curves labeled (0) to (6).]

Figure 4.1 Learning curves of the mean square error $\mathrm{E}(\|W(k)-W\|^2_{\mathrm{Fro}})$, averaged over 100 independent runs of the WSA algorithm, for different values of the parameter $\beta_2/\beta_1$: 0.96 (1), 0.9 (2), 0.1 (3), 0.2 (4), 0.4 (5), and 0.6 (6), compared with $\mu\,\mathrm{Tr}(C_u)$ (0), in the case $n = 4$, $r = 2$, $C_x = \mathrm{Diag}(1.75, 1.5, 0.5, 0.25)$, where the entries of $W(0)$ are chosen randomly uniformly in [0, 1].
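The WSA recursion (4.57) itself is not reproduced in this excerpt, so the following sketch illustrates the experimental setup behind Figure 4.1 with a plain Oja-type gradient step followed by QR re-orthonormalization as a stand-in subspace tracker. The update rule, the stepsize value, and the projector-distance error metric are our assumptions; $n$, $r$, $C_x$, and the uniform initialization of $W(0)$ follow the caption.

```python
import numpy as np

rng = np.random.default_rng(0)

n, r = 4, 2
eigvals = np.array([1.75, 1.5, 0.5, 0.25])  # C_x = Diag(1.75, 1.5, 0.5, 0.25)
sqrt_C = np.diag(np.sqrt(eigvals))          # square root of the covariance
U = np.eye(n)[:, :r]                        # true principal-subspace basis
mu = 0.01                                   # assumed stepsize

W = rng.uniform(0.0, 1.0, size=(n, r))      # entries of W(0) uniform in [0, 1]

def subspace_error(W, U):
    # Frobenius distance between the orthogonal projectors onto the
    # estimated and true subspaces (invariant to the choice of basis).
    Q, _ = np.linalg.qr(W)
    return np.linalg.norm(Q @ Q.T - U @ U.T)

errors = []
for k in range(4000):
    x = sqrt_C @ rng.standard_normal(n)     # x(k) with covariance C_x
    W = W + mu * np.outer(x, x) @ W         # Oja-type gradient step
    W, _ = np.linalg.qr(W)                  # re-orthonormalize (stand-in for WSA's normalization)
    errors.append(subspace_error(W, U))

print(errors[0], errors[-1])  # error decays toward a mu-dependent floor
```

Averaging such error trajectories over independent runs, as done for Figure 4.1, produces learning curves whose steady-state level scales with $\mu$, which is why the comparison fixes $\mu\,\mathrm{Tr}(C_u)$ across algorithms.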