There are infinitely many solutions to this problem, but they can all be written as
$$\mathbf{w}_{\min} = \frac{\mathbf{x}}{\|\mathbf{x}\|^2}\, d + \tilde{\mathbf{x}},$$
where $\tilde{\mathbf{x}}$ is any vector in the subspace orthogonal to the one spanned by $\mathbf{x}(n)$. However, given the particular initial condition $\mathbf{w}_0 = \mathbf{w}(n-1)$, it is not difficult to show that
$$\tilde{\mathbf{x}} = \left(\mathbf{I}_L - \frac{\mathbf{x}\mathbf{x}^T}{\|\mathbf{x}\|^2}\right)\mathbf{w}_0.$$
Putting it all together and reincorporating the time index, the final estimate obtained by iterating the LMS repeatedly is
$$\mathbf{w}(n) = \left(\mathbf{I}_L - \frac{\mathbf{x}(n)\mathbf{x}^T(n)}{\|\mathbf{x}(n)\|^2}\right)\mathbf{w}(n-1) + \frac{\mathbf{x}(n)}{\|\mathbf{x}(n)\|^2}\, d(n). \qquad (4.29)
$$
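As a quick numerical check (our own sketch, not part of the original text; the step size, filter length, and variable names are assumptions), the following NumPy snippet iterates the plain LMS on a single fixed data pair starting from $\mathbf{w}(n-1)$ and compares the result with the closed form in (4.29):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 8
x = rng.standard_normal(L)            # fixed regressor x(n)
d = rng.standard_normal()             # fixed desired sample d(n)
w0 = rng.standard_normal(L)           # initial condition w(n-1)

norm_sq = x @ x
mu = 0.5 / norm_sq                    # step size small enough for convergence

# iterate the plain LMS repeatedly on the same data pair
w = w0.copy()
for _ in range(2000):
    w += mu * x * (d - x @ w)

# closed-form limit, Eq. (4.29)
w_limit = (np.eye(L) - np.outer(x, x) / norm_sq) @ w0 + x / norm_sq * d

print(np.max(np.abs(w - w_limit)))    # ~0: the LMS iterates reach (4.29)
```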
Rearranging terms, (4.29) is equivalent to
$$\mathbf{w}(n) = \mathbf{w}(n-1) + \frac{\mathbf{x}(n)}{\|\mathbf{x}(n)\|^2}\, e(n),$$
which is the NLMS update with step size equal to one!
This interesting interpretation lends further support to the idea that the NLMS converges faster than the LMS. At each time step, the NLMS finds the minimum of (4.27) in "a single iteration", in contrast with the LMS. Notice that in this case $J(\mathbf{w}_{\min}) = 0$, which is equivalent to nullifying the a posteriori output estimation error.
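As a complementary check (again our own sketch; names and values are assumptions), a single NLMS update with step size one indeed drives the a posteriori error to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8
x = rng.standard_normal(L)          # regressor x(n)
d = rng.standard_normal()           # desired sample d(n)
w_prev = rng.standard_normal(L)     # previous estimate w(n-1)

e = d - x @ w_prev                  # a priori error e(n)
w = w_prev + x / (x @ x) * e        # NLMS update with step size one

print(d - x @ w)                    # a posteriori error: ~0 (machine precision)
```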
4.2.5 Computational Complexity of NLMS
Compared to the LMS, there is an extra cost in computing $\|\mathbf{x}(n)\|^2$. In general this takes $L$ multiplications and $L-1$ additions. An extra addition is used for the regularization, and a division is also required. The total cost is then $3L+1$ multiplications and $3L$ additions. However, consider the case where the data has a shift structure (tapped delay line), a very common situation in practice, e.g., in channel estimation. In this case, two successive regressors differ in only two entries, so
$$\|\mathbf{x}(n)\|^2 = \|\mathbf{x}(n-1)\|^2 + |x(n)|^2 - |x(n-L)|^2.$$
That is, we can reuse the value of $\|\mathbf{x}(n-1)\|^2$ to compute $\|\mathbf{x}(n)\|^2$ efficiently. This means that, with respect to the LMS, the NLMS requires in this case an extra computation of 4 multiplications, 2 additions, and 1 division.
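A minimal sketch of this shift-structure trick (our own code, assuming a scalar input sequence with zero prehistory; the function name and regularization constant are ours) keeps a running value of $\|\mathbf{x}(n)\|^2$ so that each step needs only two extra multiplications and two extra additions for the norm:

```python
import numpy as np

def nlms_shift_structure(x, d, L, mu=1.0, delta=1e-6):
    """NLMS for a tapped-delay-line regressor with a recursive norm update."""
    w = np.zeros(L)
    x_pad = np.concatenate([np.zeros(L), x])   # zero prehistory for the delay line
    norm_sq = 0.0                              # running value of ||x(n)||^2
    for n in range(len(x)):
        # ||x(n)||^2 = ||x(n-1)||^2 + |x(n)|^2 - |x(n-L)|^2
        norm_sq += x_pad[L + n] ** 2 - x_pad[n] ** 2
        reg = x_pad[L + n:n:-1]                # regressor [x(n), ..., x(n-L+1)]
        e = d[n] - reg @ w                     # a priori error e(n)
        w = w + mu * reg * e / (delta + norm_sq)
    return w
```

Here `delta` plays the role of the regularization term mentioned above, protecting the division when the running norm is close to zero.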
 