Assuming that $\mathbf{w}_T$ is the true system to be identified and that the noise sequence $v(n)$ is i.i.d., zero-mean, and independent of the inputs $\mathbf{x}(n)$, some properties follow:
Assuming some form of persistent excitation on the input and in the absence of noise $v(n)$, the RLS is exponentially stable [15], [17]. That is, it converges exponentially to the true system $\mathbf{w}_T$.
In the presence of noise and when $\lambda = 1$, if we do not take into account the slight modifications introduced by the initialization, the estimate $\mathbf{w}(n)$ is unbiased ($E[\mathbf{w}(n)] = \mathbf{w}_T$) when $n \ge L$. When $0 < \lambda < 1$, the estimate $\mathbf{w}(n)$ is asymptotically unbiased, with the bias vanishing exponentially.
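To make the unbiasedness property concrete, the following minimal NumPy sketch (not the book's code; all parameter values are hypothetical) runs the standard RLS recursion with $\lambda = 1$ over many independent realizations and averages the resulting estimates, which should recover $\mathbf{w}_T$ closely:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, runs, delta = 4, 200, 500, 1e-2   # hypothetical filter length, horizon, trials
w_T = rng.standard_normal(L)            # "true" system to be identified

w_avg = np.zeros(L)
for _ in range(runs):
    A_inv = np.eye(L) / delta           # initialization: A_0 = delta * I
    w = np.zeros(L)
    for n in range(N):
        x = rng.standard_normal(L)      # i.i.d. input regressor
        d = x @ w_T + 0.1 * rng.standard_normal()   # noisy observation
        k = A_inv @ x / (1.0 + x @ A_inv @ x)       # k(n) = A_n^{-1} x(n), lambda = 1
        w += k * (d - x @ w)            # update driven by the a priori error
        A_inv -= np.outer(k, x @ A_inv) # rank-1 update of A_n^{-1} (matrix inversion lemma)
    w_avg += w / runs

print(np.max(np.abs(w_avg - w_T)))      # close to 0: E[w(n)] ~ w_T for n >= L
```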
Assuming that the process $\mathbf{x}(n)$ is ergodic in its second moment, the EMSE of the RLS algorithm can be written as [18]:

$$\xi = \frac{\sigma_v^2\,(1-\lambda)\,L}{2-(1-\lambda)L} \qquad (5.58)$$
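For a sense of scale (illustrative values only, not from the book), note that for $\lambda$ close to 1 the misadjustment $\xi/\sigma_v^2$ in (5.58) reduces to approximately $(1-\lambda)L/2$:

```python
sigma_v2, L = 1e-2, 32                  # hypothetical noise power and filter length
for lam in (0.99, 0.999):
    xi = sigma_v2 * (1 - lam) * L / (2 - (1 - lam) * L)
    print(lam, xi, xi / sigma_v2)       # misadjustment: ~19% and ~1.6% respectively
```

As expected, a forgetting factor closer to 1 trades tracking ability for a smaller steady-state EMSE.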
The convergence speed of the RLS algorithm is an order of magnitude (or even more) faster than that of the LMS algorithm, and it is insensitive to the condition number $\chi(\mathbf{R}_x)$. This makes the RLS especially well suited to problems where the input signals are highly correlated. The reason for the robustness of the RLS with respect to the input signal color comes from the fact that, when $n$ is sufficiently large, $\mathbf{A}_n$ is a very good estimator of $\frac{1}{1-\lambda}\mathbf{R}_x$. Using this, and the easy-to-prove property that $\mathbf{k}(n) = \mathbf{A}_n^{-1}\mathbf{x}(n)$, we can write:
$$\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{A}_n^{-1}\mathbf{x}(n)\left[d(n) - \mathbf{x}^T(n)\,\mathbf{w}(n-1)\right] \qquad (5.59)$$

$$\mathbf{w}(n) \approx \mathbf{w}(n-1) + (1-\lambda)\,\mathbf{R}_x^{-1}\mathbf{x}(n)\left[d(n) - \mathbf{x}^T(n)\,\mathbf{w}(n-1)\right]. \qquad (5.60)$$
Defining $\mathbf{c}(n) = \mathbf{R}_x^{1/2}\mathbf{w}(n)$ and $\tilde{\mathbf{x}}(n) = \mathbf{R}_x^{-1/2}\mathbf{x}(n)$, we can write:
$$\mathbf{c}(n) = \mathbf{c}(n-1) + (1-\lambda)\,\tilde{\mathbf{x}}(n)\left[d(n) - \tilde{\mathbf{x}}^T(n)\,\mathbf{c}(n-1)\right]. \qquad (5.61)$$
In this way, the RLS can be viewed as an LMS algorithm with $\mu = 1-\lambda$ and a modified input, which is the original input $\mathbf{x}(n)$ filtered by $\mathbf{R}_x^{-1/2}$. This operation has a whitening effect, i.e., $\tilde{\mathbf{x}}(n)$ has a correlation matrix equal to $\mathbf{I}_L$. This situation of uncorrelated input is clearly the most beneficial for the LMS algorithm in terms of speed of convergence.
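A minimal sketch of this interpretation (assuming $\mathbf{R}_x$ is known exactly and using a hypothetical AR(1) input; all values are illustrative and this is not the book's code): running LMS with $\mu = 1-\lambda$ on the whitened input $\tilde{\mathbf{x}}(n)$ and mapping the iterate back through $\mathbf{R}_x^{-1/2}$ identifies the system even though the raw input is strongly colored:

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, lam, a = 4, 5000, 0.995, 0.9      # hypothetical sizes; AR(1) pole a gives chi(R_x) >> 1
mu = 1 - lam                            # LMS step size mu = 1 - lambda

# Correlation matrix of a unit-variance AR(1) process: [R_x]_{ij} = a^|i-j|
R_x = a ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
evals, evecs = np.linalg.eigh(R_x)
R_isqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T   # R_x^{-1/2}

w_T = rng.standard_normal(L)            # "true" system
u = np.zeros(N + L)                     # correlated input stream
for t in range(1, N + L):
    u[t] = a * u[t - 1] + np.sqrt(1 - a**2) * rng.standard_normal()

c = np.zeros(L)                         # iterate in whitened coordinates, c(n) = R_x^{1/2} w(n)
for n in range(N):
    x = u[n:n + L]                      # regressor from the correlated stream
    d = x @ w_T + 0.05 * rng.standard_normal()       # noisy desired signal
    x_t = R_isqrt @ x                   # whitened input: E[x_t x_t^T] = I_L
    c += mu * x_t * (d - x_t @ c)       # LMS update in the form of (5.61)

print(np.linalg.norm(R_isqrt @ c - w_T))  # w(n) = R_x^{-1/2} c(n) is close to w_T
```

The true RLS does not need $\mathbf{R}_x$ in advance; it effectively builds this whitening transformation on the fly through $\mathbf{A}_n^{-1}$, which is what makes it insensitive to the input color.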
In Sect. 3.5 we mentioned that the NR method can be interpreted as an SD algorithm with a pre-whitened input. In fact, we have seen that the NLMS, APA, and RLS can be interpreted as approximations to the NR algorithm. The NLMS can be derived from an instantaneous cost function or using the estimators (4.2). The APA can be associated with a cost function where the last $K$ errors (based on the last $K$ input regressors and observations) are averaged, or with the NR method applied to the MSE cost function using the estimators (4.162). The use of more input-output pairs makes the cost function closer to the actual MSE and