Digital Signal Processing Reference
4.1.2 Computational Complexity of LMS
Usually, when treating the computational complexity of a given numerical procedure,
it is customary to consider only the computational cost of sums and products. These
are by large the major contributors to the computational load of an algorithm and
are almost independent of the digital platform where the algorithm is implemented.
Therefore, they provide a first approximation of the algorithm complexity. A more
precise account for the complexity of an algorithm should include static and dynamic
memory usage and management, the use of floating or fixed point arithmetic, etc. As
these could be platform dependent we do not consider them in the following analysis.
The computational load per iteration of the LMS algorithm can be summarized as
follows:
Complexity of the filtering process: The filtering process basically consists in
calculating the inner product w T
(
n
1
)
x
(
n
)
. It is easy to see that this requires L
multiplications and L
1 additions [ 1 ]. In order to compute e
(
n
)
we need an extra
addition, resulting in a total of L additions.
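As a concrete illustration, the filtering step and its operation count can be sketched in NumPy (the array names and sizes here are our own choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                        # filter length
w = rng.standard_normal(L)   # w(n-1): current coefficient vector
x = rng.standard_normal(L)   # x(n): input regressor
d = rng.standard_normal()    # d(n): desired sample

# Inner product w^T(n-1) x(n): L multiplications and L-1 additions.
y = np.dot(w, x)
# Error e(n) = d(n) - y(n): one extra addition (a subtraction),
# bringing the filtering step to a total of L additions.
e = d - y

mults_filter = L             # L multiplications
adds_filter = (L - 1) + 1    # L-1 additions plus one for the error
print(mults_filter, adds_filter)
```

For L = 8 this prints `8 8`, matching the L multiplications and L additions counted above.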
Complexity of the update calculation: This includes the computational load of obtaining w(n) from w(n − 1), x(n) and e(n). The cost of computing μ e(n)x(n) consists of L + 1 multiplications. After computing that term, obtaining w(n) requires L additions.
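The update step can be sketched the same way (again a NumPy illustration with hypothetical values; the operation counts are annotated in the comments):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 8
mu = 0.01                    # step size
w = rng.standard_normal(L)   # w(n-1)
x = rng.standard_normal(L)   # x(n)
e = 0.5                      # e(n), already available from the filtering step

# mu * e(n) is 1 multiplication; scaling x(n) by it adds L more,
# so the term mu*e(n)*x(n) costs L+1 multiplications in total.
t = (mu * e) * x
# w(n) = w(n-1) + mu*e(n)*x(n): L additions.
w = w + t

print(L + 1, L)              # multiplications and additions of the update
```

For L = 8 this prints `9 8`, i.e. L + 1 multiplications and L additions for the update.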
The total cost of the LMS is then 2L + 1 multiplications and 2L additions per iteration. As the total number of operations is proportional to L, the algorithm is O(L).
4.1.3 LMS with Complex Signals
In the previous sections we obtained the LMS algorithm assuming that the signals d(n) and x(n) were real. In a great number of applications, however, the signals involved can be modeled as complex quantities. For this reason, it is necessary to have a proper formulation of the LMS (or any adaptive algorithm) for the case of complex signals.
For complex signals d(n) and x(n), the correlation matrix and cross-correlation vector can be defined as

R_x = E[x(n) x^H(n)],    r_xd = E[d*(n) x(n)],    (4.10)

where (·)* and (·)^H denote the conjugation and conjugate transposition operations. The optimal Wiener solution (which would be a complex vector) that minimizes E|e(n)|^2 has the same mathematical form as for the real case, but using the expressions in (4.10). Following the same approach as in Sect. 4.1.1 we obtain that the LMS for complex signals can be written as

w(n) = w(n − 1) + μ x(n) e*(n).    (4.11)
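A minimal NumPy sketch of the complex LMS recursion in (4.11), applied to identifying an unknown complex-valued system (the signal model, filter output convention w^H(n − 1)x(n), and step size are our assumptions for the demonstration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
L = 4
mu = 0.05
# Unknown complex system to identify (hypothetical).
w_true = rng.standard_normal(L) + 1j * rng.standard_normal(L)
w = np.zeros(L, dtype=complex)   # adaptive filter, w(0) = 0

for n in range(2000):
    # Complex regressor x(n) with unit-variance entries.
    x = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    d = np.vdot(w_true, x)       # desired signal: d(n) = w_true^H x(n)
    e = d - np.vdot(w, x)        # error: e(n) = d(n) - w^H(n-1) x(n)
    w = w + mu * np.conj(e) * x  # complex LMS update, Eq. (4.11)

# In this noiseless setting the coefficients converge to w_true.
print(np.allclose(w, w_true, atol=1e-3))
```

Note that only the conjugation of e(n) in the update (and the Hermitian transpose in the inner products) distinguishes this from the real-valued recursion; the per-iteration cost remains O(L), with each complex operation costing a few real ones.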