Digital Signal Processing Reference
a = [a_1 a_2 ... a_{K-1}]^T are the autoregressive process coefficients and where the L-dimensional vector

ṽ(n) = [ṽ(n) ṽ(n−1) ... ṽ(n−L+1)]^T

contains the samples of the white noise exciting the autoregressive process. The least squares (LS) estimate (see next chapter) of a at time n is²¹:

â(n) = (X^T(n) X(n))^{-1} X^T(n) x(n).    (4.161)
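As a concrete illustration, (4.161) can be evaluated numerically. The sketch below (Python/NumPy; the signal, L, and K values are illustrative assumptions, not the book's simulation) builds X(n) from the K−1 past regressors and solves the normal equations, cross-checking against a standard least squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 8, 4                       # filter length L and projection order K (illustrative)
sig = rng.standard_normal(200)    # hypothetical input signal
n = 100                           # current time index

def regressor(s, n, L):
    """x(n) = [x(n), x(n-1), ..., x(n-L+1)]^T."""
    return s[n - L + 1:n + 1][::-1]

x_n = regressor(sig, n, L)
# X(n) collects the K-1 past regressors x(n-1), ..., x(n-K+1) as columns
X_n = np.column_stack([regressor(sig, n - k, L) for k in range(1, K)])

# LS estimate (4.161): a_hat(n) = (X^T(n) X(n))^(-1) X^T(n) x(n)
a_hat = np.linalg.solve(X_n.T @ X_n, X_n.T @ x_n)

# Cross-check against NumPy's least squares solver
a_ls, *_ = np.linalg.lstsq(X_n, x_n, rcond=None)
print(np.allclose(a_hat, a_ls))   # True
```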
Then, the direction of the update of the APA can be written as p(n) = x(n) − X(n)â(n). From (4.160), this means that p(n) is an estimate of the white noise ṽ(n). Hence, instead of using the correlated signal x(n) to do the update, the APA uses the uncorrelated one p(n). Therefore, an improvement in the speed of convergence can be expected. This reflects the most important motivation for using the APA: to obtain a fast convergent algorithm when the signals involved in our problem are highly correlated.
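To make the decorrelation idea concrete, here is a minimal sketch (Python/NumPy) of an adaptive iteration that steps along p(n) = x(n) − X(n)â(n) instead of x(n). The system-identification setup, step size, and regularization constants are illustrative assumptions, not the book's exact recursion; note that p^T(n)x(n) = p^T(n)p(n), since the LS residual p(n) is orthogonal to the columns of X(n).

```python
import numpy as np

rng = np.random.default_rng(1)
L, K, mu = 16, 3, 1.0             # illustrative filter length, order, step size
w_true = rng.standard_normal(L)   # hypothetical unknown system
sig = rng.standard_normal(600)    # input signal

def regressor(s, n, L):
    return s[n - L + 1:n + 1][::-1]

w = np.zeros(L)
for n in range(L + K, len(sig)):
    x_n = regressor(sig, n, L)
    X_n = np.column_stack([regressor(sig, n - k, L) for k in range(1, K)])
    # LS prediction coefficients as in (4.161), lightly regularized
    a_hat = np.linalg.solve(X_n.T @ X_n + 1e-8 * np.eye(K - 1), X_n.T @ x_n)
    p_n = x_n - X_n @ a_hat       # decorrelated update direction p(n)
    e_n = w_true @ x_n - w @ x_n  # a priori error (noiseless desired signal)
    # Normalized step along p(n); p.x = p.p because p is the LS residual
    w = w + mu * e_n * p_n / (p_n @ p_n + 1e-8)

print(np.linalg.norm(w - w_true) / np.linalg.norm(w_true))  # small misalignment
```

With μ = 1 and no noise, each step drives the a posteriori error at time n to zero, so the filter converges to w_true.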
If μ is different from one, it can be associated with the error signal, so the conclusions made above still hold (the error signal is passed through a filter with a single coefficient equal to μ). To read more on filtered error algorithms, see [55].
Finally, we can also find the APA as an approximation to an NR algorithm in the same way we did for the NLMS. However, in this case, instead of using the instantaneous estimates given by (4.2) to replace the statistics used in the NR algorithm, we use the following:

R̂_x = (1/K) ∑_{i=n−K+1}^{n} x(i) x^T(i)   and   r̂_{xd} = (1/K) ∑_{i=n−K+1}^{n} d(i) x(i).    (4.162)
The use of the last K input regressors in these estimates improves the quality of the estimators: the higher the value of K, the closer the estimators get to the actual statistics. This brings the algorithm closer to the NR operation mode, so we should expect an improvement in the speed of convergence.
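As a sketch of this idea (Python/NumPy; the dimensions, window sizes, and noiseless setup are illustrative assumptions), the estimates in (4.162) can be formed from the last K regressors and plugged into an NR-style solve. With noiseless data the solve recovers the true system exactly, and a larger K drives R̂_x toward the true autocorrelation matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
L, N = 4, 5000
sig = rng.standard_normal(N)          # white input, so the true R_x is the identity
w_true = rng.standard_normal(L)       # hypothetical unknown system

def regressor(s, n, L):
    return s[n - L + 1:n + 1][::-1]

def estimates(n, K):
    """Sliding-window statistics (4.162) over i = n-K+1, ..., n."""
    R = np.zeros((L, L))
    r = np.zeros(L)
    for i in range(n - K + 1, n + 1):
        x_i = regressor(sig, i, L)
        R += np.outer(x_i, x_i) / K
        r += (w_true @ x_i) * x_i / K  # d(i) = w_true^T x(i), noiseless
    return R, r

n = N - 1
R_hat, r_hat = estimates(n, K=1000)
w_hat = np.linalg.solve(R_hat, r_hat)  # NR-style solve: w = R_x^{-1} r_xd
print(np.allclose(w_hat, w_true))      # True: noiseless LS over the window

# Larger K -> estimates closer to the true statistics (here R_x = I)
err_small = np.linalg.norm(estimates(n, K=8)[0] - np.eye(L))
err_large = np.linalg.norm(estimates(n, K=1000)[0] - np.eye(L))
print(err_large < err_small)
```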
4.7 Simulation Results
In this section we provide simulations that illustrate some of the results of the convergence analysis introduced in this chapter. Regarding the stability results, the reader is referred to [37], where an extensive simulation analysis of the stability of the LMS and SDA was conducted.
In our simulations we consider a system identification scenario with the linear regression model (2.11). The input sequence x(n) was generated with a Gaussian AR(1) process with a pole at a, i.e.,
²¹ Notice the similarity of this with a linear prediction problem from Sect. 2.5!