iterative “Kalman-like” spirit. Interestingly, this is not particularly difficult
if we rewrite (3.72) as
$$
\mathbf{R}(n) = \sum_{k=1}^{n} \lambda(k)\,\mathbf{x}(k)\mathbf{x}^T(k)
= \lambda(n)\,\mathbf{x}(n)\mathbf{x}^T(n)
+ \sum_{k=1}^{n-1} \lambda(k)\,\mathbf{x}(k)\mathbf{x}^T(k)
\qquad (3.76)
$$
and (3.73) as
$$
\boldsymbol{\pi}(n) = \sum_{k=1}^{n} \lambda(k)\,\mathbf{x}(k)\,d(k)
= \lambda(n)\,\mathbf{x}(n)\,d(n)
+ \sum_{k=1}^{n-1} \lambda(k)\,\mathbf{x}(k)\,d(k)
\qquad (3.77)
$$
If we replace $\lambda(k)$ by the forgetting factor, as posed in (3.68), it follows that
$$
\begin{aligned}
\mathbf{R}(n) &= \sum_{k=1}^{n} \lambda^{\,n-k}\,\mathbf{x}(k)\mathbf{x}^T(k)
= \mathbf{x}(n)\mathbf{x}^T(n)
+ \lambda \sum_{k=1}^{n-1} \lambda^{\,n-k-1}\,\mathbf{x}(k)\mathbf{x}^T(k) \\
&= \mathbf{x}(n)\mathbf{x}^T(n) + \lambda\,\mathbf{R}(n-1)
\end{aligned}
\qquad (3.78)
$$
and
$$
\begin{aligned}
\boldsymbol{\pi}(n) &= \sum_{k=1}^{n} \lambda^{\,n-k}\,\mathbf{x}(k)\,d(k)
= \mathbf{x}(n)\,d(n)
+ \lambda \sum_{k=1}^{n-1} \lambda^{\,n-k-1}\,\mathbf{x}(k)\,d(k) \\
&= \mathbf{x}(n)\,d(n) + \lambda\,\boldsymbol{\pi}(n-1)
\end{aligned}
\qquad (3.79)
$$
From now on, as shown in the last equations, the time indices are incorporated into $\mathbf{R}$ and $\boldsymbol{\pi}$ to emphasize their character as temporal estimates that depend on the amount of available data.
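As a sanity check of the recursions in (3.78) and (3.79), the sketch below (with synthetic data and an arbitrarily chosen forgetting factor, both assumptions for illustration) verifies that the recursive accumulation matches the direct exponentially weighted sums:

```python
import numpy as np

# Numerical check of (3.78)-(3.79): the recursively accumulated R(n) and
# pi(n) must equal the direct sums with weights lambda**(n-k).
rng = np.random.default_rng(0)
lam = 0.95          # forgetting factor (illustrative value)
N, M = 50, 3        # number of samples, input dimension
X = rng.standard_normal((N, M))   # x(1)..x(N) as rows (synthetic data)
d = rng.standard_normal(N)        # desired signal d(1)..d(N)

# Recursive accumulation, as in (3.78) and (3.79)
R = np.zeros((M, M))
p = np.zeros(M)
for n in range(N):
    x = X[n]
    R = np.outer(x, x) + lam * R      # R(n) = x(n)x^T(n) + lam*R(n-1)
    p = x * d[n] + lam * p            # pi(n) = x(n)d(n) + lam*pi(n-1)

# Direct (batch) evaluation of the weighted sums
w = lam ** np.arange(N - 1, -1, -1)   # lambda^(n-k) for k = 1..n
R_direct = (X * w[:, None]).T @ X
p_direct = X.T @ (w * d)

print(np.allclose(R, R_direct), np.allclose(p, p_direct))
```

The recursive form needs only the newest sample and the previous estimate, which is exactly what makes the online "Kalman-like" operation possible.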
From (3.78), the temporal autocorrelation matrix is obtained recursively, but it must still be inverted to provide the optimal solution. In order to avoid direct matrix inversion, it is convenient to make use of an elegant mathematical result known as the matrix inversion lemma [128]. Using this lemma, it is possible to update the inverse of $\mathbf{R}(n)$ directly by
$$
\mathbf{R}^{-1}(n) = \lambda^{-1}\,\mathbf{R}^{-1}(n-1)
- \frac{\lambda^{-2}\,\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)\,\mathbf{x}^T(n)\,\mathbf{R}^{-1}(n-1)}
       {1 + \lambda^{-1}\,\mathbf{x}^T(n)\,\mathbf{R}^{-1}(n-1)\,\mathbf{x}(n)}
\qquad (3.80)
$$
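A quick numerical check of (3.80), using a synthetic positive-definite matrix and input vector (both assumptions for illustration): the rank-one update of the inverse should coincide with a brute-force inversion of $\mathbf{R}(n) = \mathbf{x}(n)\mathbf{x}^T(n) + \lambda\,\mathbf{R}(n-1)$.

```python
import numpy as np

# Check the matrix-inversion-lemma update (3.80) against direct inversion.
rng = np.random.default_rng(1)
lam = 0.98
M = 4
A = rng.standard_normal((M, M))
R_prev = A @ A.T + np.eye(M)          # a positive-definite R(n-1)
R_prev_inv = np.linalg.inv(R_prev)
x = rng.standard_normal(M)            # new input vector x(n)

# Eq. (3.80): R^{-1}(n) = lam^-1 R^{-1}(n-1)
#   - lam^-2 R^{-1}(n-1) x x^T R^{-1}(n-1) / (1 + lam^-1 x^T R^{-1}(n-1) x)
num = (R_prev_inv @ np.outer(x, x) @ R_prev_inv) / lam**2
den = 1.0 + (x @ R_prev_inv @ x) / lam
R_inv = R_prev_inv / lam - num / den

# Brute-force reference: invert R(n) = x x^T + lam*R(n-1) directly
R_new = np.outer(x, x) + lam * R_prev
print(np.allclose(R_inv, np.linalg.inv(R_new)))
```

The update costs $O(M^2)$ per sample instead of the $O(M^3)$ of a full inversion, which is the practical payoff of the lemma.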
This iterative calculation, together with some definitions of auxiliary variables, leads to the RLS algorithm, depicted in Algorithm 3.1 [139].
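For concreteness, a minimal sketch of the standard exponentially weighted RLS recursion follows; the initialization constant `delta` and the variable names are assumptions for illustration, and the details (e.g. initialization) may differ from Algorithm 3.1:

```python
import numpy as np

def rls(X, d, lam=0.99, delta=1000.0):
    """Standard exponentially weighted RLS sketch.
    X: (N, M) input vectors as rows; d: (N,) desired signal.
    delta is an assumed initialization constant for P(0) = delta*I."""
    N, M = X.shape
    w = np.zeros(M)                 # filter coefficients
    P = delta * np.eye(M)           # P(n) plays the role of R^{-1}(n)
    for n in range(N):
        x = X[n]
        k = P @ x / (lam + x @ P @ x)        # gain vector
        e = d[n] - w @ x                     # a priori error
        w = w + k * e                        # coefficient update
        P = (P - np.outer(k, x @ P)) / lam   # inverse update, cf. (3.80)
    return w

# Usage: identify a known 3-tap system from noiseless synthetic data
rng = np.random.default_rng(2)
w_true = np.array([0.5, -1.0, 2.0])
X = rng.standard_normal((200, 3))
d = X @ w_true
w_hat = rls(X, d)
print(np.allclose(w_hat, w_true, atol=1e-3))
```

With noiseless data the estimate converges to the true coefficients up to a small bias introduced by the `delta` initialization, which decays as $\lambda^n$.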