Some remarks are in order:
1. The use of the structure in (5.42) and the matrix inversion lemma allows us to obtain Eq. (5.54). This provides the solution to the LS problem (5.43) at time n in terms of the solution of the LS problem at time n − 1. That is, in order to obtain the solution at time n, where a new input-output pair is available, we can use the solution at the previous time instant plus a correction term given by k(n)ζ(n). We see that this correction term is proportional to the error ζ(n). Then, when the solution w(n − 1) does not provide a "good explanation" of d(n) using x(n), the correction term will be larger.
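The recursive structure described above can be sketched in a few lines of NumPy (a minimal sketch; the function name rls_step, the variable P standing for A^{-1}_{n-1}, and the default λ = 0.99 are our own choices, not the text's):

```python
import numpy as np

def rls_step(w, P, x, d, lam=0.99):
    """One RLS iteration: the solution at time n from the solution at n - 1.

    w : weights w(n-1);  P : inverse matrix A^{-1}_{n-1};
    x : new input x(n);  d : new desired output d(n).
    """
    zeta = d - w @ x                   # a priori error zeta(n)
    Px = P @ x
    k = Px / (lam + x @ Px)            # Kalman gain k(n)
    P = (P - np.outer(k, Px)) / lam    # A^{-1}_n via the matrix inversion lemma
    w = w + k * zeta                   # correction term k(n) * zeta(n)
    return w, P, zeta
```

When w(n − 1) already explains d(n) well, ζ(n) ≈ 0 and the weights barely move; a large ζ(n) produces a proportionally large correction.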
2. The vector k(n) in the correction term is called the Kalman gain [7]. Its function is to provide the proper direction for the correction term and the appropriate scaling of the error ζ(n).
3. We see that the term λ^{-1} A_{n-1}^{-1} x(n) is crucial for the operation of the algorithm and for determining its computational complexity. Obtaining λ^{-1} A_{n-1}^{-1} x(n) demands L^2 + L multiplications and L(L − 1) additions. Using this term we can calculate k(n) with L extra multiplications and L extra additions (for obtaining 1 + λ^{-1} x^T(n) A_{n-1}^{-1} x(n)), and a division. Using λ^{-1} A_{n-1}^{-1} x(n) and k(n), the update of A_n^{-1} can then be obtained with L^2 multiplications and L^2 + L − 1 additions. With L multiplications and L additions we obtain ζ(n). Finally, with L multiplications and L additions we get w(n). Therefore, we see that the complexity of the algorithm per iteration is O(L^2). This should be compared with the O(L^3) complexity of the regular LS solution without exploiting the intrinsic structure in (5.42). Accordingly, very significant savings in computational load can be obtained when L is large!
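The source of the savings is that A_n^{-1} is never recomputed from scratch: it is maintained through rank-one operations. A small numerical check (our own sketch; the dimensions and the values of λ and δ are arbitrary toy choices) confirms that the O(L^2) recursive inverse update tracks a fresh O(L^3) inversion:

```python
import numpy as np

rng = np.random.default_rng(1)
L, lam, delta = 6, 0.98, 1e-2         # assumed toy values

A = delta * np.eye(L)                 # A_{-1} = delta * I_L
P = np.linalg.inv(A)                  # A^{-1}, maintained recursively below
for _ in range(50):
    x = rng.standard_normal(L)
    A = lam * A + np.outer(x, x)      # rank-one recursion for A_n
    Px = P @ x
    # O(L^2) inverse update via the matrix inversion lemma
    P = (P - np.outer(Px, Px) / (lam + x @ Px)) / lam

# the recursively maintained P matches a direct O(L^3) inversion of A_n
max_err = np.max(np.abs(P - np.linalg.inv(A)))
```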
4. It should be noticed that the a priori error ζ(n) (which is equivalent to the error e(n)) should not be confused with the errors included in the exponentially weighted cost function for the RLS defined in (5.45). In the latter, for each i = 0, 1, ..., n, the error e(i) is computed using the filter estimate at time n, so they are more like a posteriori errors as in the APA.
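The distinction can be seen numerically in a single step: the a priori error evaluates the sample with w(n − 1), while an a posteriori error re-evaluates the same sample with the updated w(n). Since w(n) = w(n − 1) + k(n)ζ(n), the two are related by e(n) = (1 − k^T(n)x(n)) ζ(n), so |e(n)| ≤ |ζ(n)|. A sketch (variable names and toy values are our own):

```python
import numpy as np

rng = np.random.default_rng(2)
L, lam = 4, 0.99                 # assumed toy values
w = rng.standard_normal(L)       # w(n-1)
P = 100.0 * np.eye(L)            # A^{-1}_{n-1}
x = rng.standard_normal(L)       # x(n)
d = rng.standard_normal()        # d(n)

zeta = d - w @ x                 # a priori error: filter BEFORE the update
Px = P @ x
k = Px / (lam + x @ Px)          # Kalman gain k(n)
w_new = w + k * zeta             # w(n)
e = d - w_new @ x                # a posteriori error: filter AFTER the update
# algebraically, e = (1 - k @ x) * zeta, with 0 < 1 - k @ x < 1
```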
5. The value of δ and the initialization A_{-1}^{-1} = δ^{-1} I_L deserve special consideration. Notice that at every time instant we need the value of A_{n-1}^{-1}, that is, we need to guarantee that A_{n-1} is invertible. At the initial iterations (n < L), we can easily see from (5.47) that A_n will not be invertible. The reason is that this matrix will be the sum of n + 1 ≤ L matrices of rank one. The use of A_{-1}^{-1} = δ^{-1} I_L guarantees that A_n will be invertible for every successive time instant. It can be easily shown that

    A_n = Σ_{i=0}^{n} λ^{n−i} x(i) x^T(i) + λ^{n+1} δ I_L.          (5.55)
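Equation (5.55) can be checked by unrolling the recursion A_n = λ A_{n−1} + x(n)x^T(n) starting from A_{−1} = δ I_L (a sketch; the toy dimension and the values of λ, δ, and n are our own):

```python
import numpy as np

rng = np.random.default_rng(3)
L, lam, delta, n = 3, 0.95, 0.1, 20   # assumed toy values
xs = rng.standard_normal((n + 1, L))  # x(0), ..., x(n)

# recursion with the regularized initialization A_{-1} = delta * I_L
A = delta * np.eye(L)
for x in xs:
    A = lam * A + np.outer(x, x)

# closed form (5.55): weighted rank-one terms plus the decayed regularizer
A_closed = sum(lam ** (n - i) * np.outer(xs[i], xs[i]) for i in range(n + 1))
A_closed = A_closed + lam ** (n + 1) * delta * np.eye(L)
```

Note that the regularization term λ^{n+1} δ I_L decays geometrically, so its influence fades as n grows.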
The reader should notice that with (5.55), the obtained values of w(n) are not anymore the solutions of the WLS problem in (5.43). It is natural to ask how far the vectors w(n), obtained in an actual implementation of the RLS algorithm,