will be from the true solutions of (5.43). It is easy to see from (5.55) that, given
the fact that λ < 1, the effect of the initialization is forgotten as time progresses,
so the solutions obtained by the algorithm will be close to the optimal ones in
(5.43). It can be shown that the use of the initialization procedure mentioned
above is equivalent to solving the following regularized LS problem [12]:

$$\mathbf{w}(n) = \arg\min_{\mathbf{w} \in \mathbb{R}^L} \sum_{i=0}^{n} \lambda^{n-i} |e(i)|^2 + \delta \lambda^{n+1} \|\mathbf{w}\|^2. \qquad (5.56)$$

We see that a limitation on the norm of the sought solution w(n) is included in
the cost function. That is, there is a limitation on how large this solution can be.
This is an alternative way to see how the initialization avoids the possibility of
a singular A_n.
6. The parameters λ and δ have a large influence on the behavior of the algorithm.
Typically, λ is chosen in the range [0.98, 0.995] and δ ≈ 0.01 σ_x² [13]. These
guidelines do not come from a rigorous mathematical analysis of the algorithm,
but from experience with the application of the algorithm to a wide class of real
problems.
7. In Sect. 4.2.2 we showed that the NLMS can be obtained as an approximation
to an NR algorithm. Then, in Sect. 4.6.3 we extended this idea to the APA. If
we consider the cost function for the exponentially weighted RLS (5.45), the
resulting NR recursion with μ = 1 and δ = 0 is:

$$\mathbf{w}(n) = \mathbf{A}_n^{-1} \mathbf{b}_n, \qquad (5.57)$$

where A_n and b_n, given by (5.47) and (5.48), are estimators of the correlation
matrix and cross-correlation vector, respectively. In this way, the RLS can also
be linked to a stochastic gradient approximation.
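To make the equivalence in (5.56) concrete, the following NumPy sketch (not from the text; the variable names and the specific values λ = 0.99, δ = 0.01 are illustrative choices) runs the inverse-form RLS recursion with initialization A(−1) = δI and checks that its final weight vector matches the direct solution of the regularized LS problem:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 50             # filter length and number of samples (illustrative)
lam, delta = 0.99, 0.01  # forgetting factor and regularization (assumed values)
w_true = rng.standard_normal(L)

# Input/desired pairs in a system identification setting
X = rng.standard_normal((N, L))
d = X @ w_true + 0.01 * rng.standard_normal(N)

# --- RLS recursion via the matrix inversion lemma ---
# Initialization A(-1) = delta*I is equivalent to P(-1) = (1/delta)*I
P = np.eye(L) / delta
w = np.zeros(L)
for x, dn in zip(X, d):
    k = P @ x / (lam + x @ P @ x)      # gain vector
    w = w + k * (dn - x @ w)           # update with the a priori error
    P = (P - np.outer(k, x @ P)) / lam

# --- Direct solution of the regularized LS problem (5.56) at n = N-1 ---
weights = lam ** (N - 1 - np.arange(N))           # lambda^(n-i)
A = (X.T * weights) @ X + delta * lam**N * np.eye(L)
b = X.T @ (weights * d)
w_direct = np.linalg.solve(A, b)

print(np.allclose(w, w_direct))
```

Because the matrix-inversion-lemma recursion is algebraically exact, the two solutions agree to floating-point precision at every time step, not only at the final one.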
Notice that the sequence of solutions given by the RLS algorithm in (5.54) is very
similar in structure to that of the LMS algorithm from Chap. 4. However, the
computational complexity and convergence characteristics of the two algorithms are
substantially different.
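The difference in convergence behavior is most visible on a correlated input, where the large eigenvalue spread of the input correlation matrix slows the LMS down but leaves the RLS essentially unaffected. The following comparison is a sketch under assumed conditions (an AR(1) input, and step size, forgetting factor, and filter length chosen for illustration; none of these values come from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
L, N = 8, 400
w_true = rng.standard_normal(L)

# AR(1) input: strong correlation gives a large eigenvalue spread
u = np.zeros(N + L)
for n in range(1, N + L):
    u[n] = 0.9 * u[n - 1] + rng.standard_normal()
X = np.array([u[n:n + L][::-1] for n in range(N)])  # tap-delay-line regressors
d = X @ w_true + 1e-3 * rng.standard_normal(N)

# LMS: O(L) operations per iteration
mu = 0.01
w_lms = np.zeros(L)
for x, dn in zip(X, d):
    w_lms = w_lms + mu * (dn - x @ w_lms) * x

# RLS: O(L^2) operations per iteration
lam, delta = 0.99, 0.01
P = np.eye(L) / delta
w_rls = np.zeros(L)
for x, dn in zip(X, d):
    k = P @ x / (lam + x @ P @ x)
    w_rls = w_rls + k * (dn - x @ w_rls)
    P = (P - np.outer(k, x @ P)) / lam

print(np.linalg.norm(w_rls - w_true) < np.linalg.norm(w_lms - w_true))
```

After the same number of samples the RLS estimate is far closer to the true system, which illustrates the trade-off noted above: faster, eigenvalue-spread-insensitive convergence at the cost of O(L²) rather than O(L) operations per iteration.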
5.6 Convergence and Computational Complexity Issues
As in the case of stochastic gradient adaptive algorithms, it is important to study the
convergence of the RLS algorithm. The reader interested in a more in-depth
discussion of this topic can see [14-17] and the references therein. In this
section, we will only provide a short discussion of the main points of the conver-
gence behavior of the RLS algorithm. As a recursive implementation of successive
LS problems, the RLS shares several of the statistical properties of LS. In a system
identification context, and assuming the input-output pairs are related by (5.2), where