[Figure 4.2 Learning curves of the mean square error $E(\|P(k) - P\|^2_{\mathrm{Fro}})$ averaging 100 independent runs for Oja's algorithm (1) and the smoothed Oja's algorithm with $\alpha = 1$ (2) and $\alpha = 0.3$ (3), compared with $\mu\,\mathrm{Tr}(C_P)$ (0), in the same configuration $[C_x, W(0)]$ as Figure 4.1. The curves are plotted against the iteration number (0 to 500) on a logarithmic vertical scale ($10^{-2}$ to $10^{1}$).]
defined in [25]. Note that when $x(k) = (x_k, x_{k-1}, \ldots, x_{k-n+1})^T$ with $x_k$ being an ARMA stationary process, the covariance of the field (4.74), and thus $\lambda_{i,j}$, can be expressed in closed form with the help of a finite sum [23].
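For concreteness, the following sketch (not from the chapter) shows how the sliding-window data vectors $x(k) = (x_k, x_{k-1}, \ldots, x_{k-n+1})^T$ can be built from a synthetic AR(1) process, a simple special case of an ARMA stationary process; the AR coefficient, window length, and sample count are arbitrary illustrative choices.

```python
import numpy as np

def sliding_window_vectors(x, n):
    """Stack x(k) = (x_k, x_{k-1}, ..., x_{k-n+1})^T for each k >= n-1."""
    return np.stack([x[k - n + 1:k + 1][::-1] for k in range(n - 1, len(x))])

# Synthetic AR(1) process x_k = a * x_{k-1} + w_k (illustrative ARMA model).
rng = np.random.default_rng(0)
a, num_samples, n = 0.8, 1000, 4
w = rng.standard_normal(num_samples)
x = np.zeros(num_samples)
for k in range(1, num_samples):
    x[k] = a * x[k - 1] + w[k]

X = sliding_window_vectors(x, n)   # each row is one regressor x(k)
print(X.shape)                     # (num_samples - n + 1, n)
```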
The domain of learning rate $\mu$ for which the previously described asymptotic approach is valid, and the performance criteria for which no analytical results could be derived from our first-order analysis, such as the speed of convergence and the deviation from orthonormality $d^2(\mu) \stackrel{\mathrm{def}}{=} E\|W^T(k)W(k) - I_r\|^2_{\mathrm{Fro}}$, can be derived from
numerical experiments only. In order to compare Oja's algorithm and the smoothed Oja's algorithm, the associated parameters $\mu$ and $(\alpha, \mu)$ must be constrained to give the same value of $\mu\,\mathrm{Tr}(C_P)$. Under these conditions, it has been shown in [25] by numerical simulations that the smoothed Oja's algorithm provides faster convergence and a smaller deviation from orthonormality $d^2(\mu)$ than Oja's algorithm. More precisely, it has been shown that $d^2(\mu) \propto \mu^2$ [resp. $\propto \mu^4$] for Oja's [resp. the smoothed Oja's] algorithm. This result agrees with the presentation of Oja's algorithm given in Subsection 4.5.1, in which the term $O(\mu_k)$ was omitted from the orthonormalization of the columns of $W(k)$.
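As a rough illustration of the kind of numerical experiment described above, the sketch below runs the standard Oja subspace update $W(k+1) = W(k) + \mu\,[x(k)y^T(k) - W(k)y(k)y^T(k)]$ with $y(k) = W^T(k)x(k)$ and estimates the deviation from orthonormality $d^2(\mu)$ by averaging independent runs. The i.i.d. Gaussian data model, the dimensions, the step sizes, and the run and iteration counts are illustrative assumptions that do not reproduce the configuration $[C_x, W(0)]$ of Figure 4.1, and the smoothed Oja's algorithm is not implemented here.

```python
import numpy as np

def oja_orthonormality_deviation(mu, n=8, r=2, num_iter=2000, num_runs=100, seed=0):
    """Estimate d^2(mu) = E||W^T(k) W(k) - I_r||_Fro^2 for Oja's subspace algorithm.

    Data x(k) are i.i.d. zero-mean Gaussian with a diagonal covariance C_x
    (an illustrative choice, not the configuration of Figure 4.1).
    """
    rng = np.random.default_rng(seed)
    # Covariance with r dominant eigenvalues so that an r-dimensional
    # dominant subspace is well defined.
    eigvals = np.concatenate([np.full(r, 10.0), np.ones(n - r)])
    sqrt_cov = np.sqrt(eigvals)

    dev = 0.0
    for _ in range(num_runs):
        # Random orthonormal initial matrix W(0).
        W, _ = np.linalg.qr(rng.standard_normal((n, r)))
        for _ in range(num_iter):
            x = sqrt_cov * rng.standard_normal(n)                 # x(k) ~ N(0, C_x)
            y = W.T @ x                                           # y(k) = W^T(k) x(k)
            W = W + mu * (np.outer(x, y) - W @ np.outer(y, y))    # Oja update
        dev += np.linalg.norm(W.T @ W - np.eye(r), "fro") ** 2
    return dev / num_runs

# According to [25], d^2(mu) is proportional to mu^2 for Oja's algorithm; the
# printed values should roughly reflect that trend, although longer runs may be
# needed for the smallest step sizes to reach steady state.
for mu in (1e-3, 5e-4, 2.5e-4):
    print(mu, oja_orthonormality_deviation(mu))
```

If the $\mu^2$ trend reported in [25] holds, halving $\mu$ should roughly quarter the estimated $d^2(\mu)$; a comparable experiment for the smoothed algorithm would require its update equations and the matched constraint on $\mu\,\mathrm{Tr}(C_P)$.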
Finally, using the theorem of continuity (e.g., [59, Th. 6.2a]), note that the behavior of any differentiable function of $P(k)$ can be obtained. For example, in DOA tracking