Nevertheless, the simplification in computations comes at the expense of slower convergence. In fact, as will be seen later in the simulation results on adaptive equalization, the shape of the learning curve is quite peculiar: the initial convergence of the MSE is very slow, followed by a quick drop. To understand this effect, the SEA can be written as
SEA can be put as
μ
w
(
n
) =
w
(
n
1
) +
x
(
n
)
e
(
n
).
|
e
(
n
) |
This can be interpreted as an LMS with a variable step size
.How-
ever, this step size grows as the algorithm approaches convergence, so to guarantee
stability, a very small
μ(
n
) = μ/ |
e
(
n
) |
should be used. On the other hand, at the beginning of the
adaptation process the error will be large, so
μ
will be very small, and the algo-
rithm will be very slow. After the error decreases sufficiently, the step size grows
enough to speed up convergence.
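To make the equivalence concrete, here is a minimal Python sketch of one SEA iteration next to a standard LMS iteration; the function names (lms_update, sea_update) and the real-valued FIR setup are our own illustration, not taken from the text.

```python
import numpy as np

def lms_update(w, x, d, mu):
    """One LMS iteration: w(n) = w(n-1) + mu * x(n) * e(n)."""
    e = d - w @ x                       # a priori error e(n)
    return w + mu * e * x, e

def sea_update(w, x, d, mu):
    """One SEA iteration: w(n) = w(n-1) + mu * x(n) * sign(e(n))."""
    e = d - w @ x
    return w + mu * np.sign(e) * x, e   # only the sign of e(n) is used
```

Note that sea_update with a fixed step size mu coincides with lms_update run with the variable step size mu/|e(n)|, which is exactly the interpretation above.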
There is another important feature of the SEA. Consider the linear regression
model (2.11). Assume that the power of the noise is not too large and that the adaptive filter is close to its steady state, so the error is small. Also, assume that at time n this error is (slightly) positive. In this case, the updates of the LMS and SEA will be very similar (with step sizes properly chosen). However, if a very large (positive) sample of noise $v(n)$ had appeared (e.g., impulsive noise or a double-talk situation in echo cancelation), the outcome would have been very different. The update of the SEA would have been exactly the same, since it relies only on the (unchanged) sign of the error, so the filter estimate would not have been perturbed. In contrast, the LMS would suffer a large change due to the resulting large error sample. This robust performance against perturbations is a distinctive feature of the SEA. In Sect. 6.4 we will elaborate more on this important feature.
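The following sketch illustrates this robustness numerically. The scenario (filter length, step size, noise amplitudes) is invented for the example; it simply compares the size of one LMS step and one SEA step when a nominal and an impulsive noise sample hit the desired signal.

```python
import numpy as np

rng = np.random.default_rng(0)
L, mu = 8, 0.01
w_true = rng.standard_normal(L)              # unknown system in the regression model
w = w_true + 0.01 * rng.standard_normal(L)   # adaptive filter near steady state
x = rng.standard_normal(L)                   # regressor x(n)

for v in (1e-3, 100.0):                      # small noise vs. impulsive v(n)
    e = (w_true @ x + v) - w @ x             # a priori error
    lms_step = mu * e * x                    # LMS: scales with the error
    sea_step = mu * np.sign(e) * x           # SEA: depends only on sign(e)
    print(f"v={v:>6}: |LMS step|={np.linalg.norm(lms_step):9.4f}, "
          f"|SEA step|={np.linalg.norm(sea_step):.4f}")
```

The SEA step has the same size in both cases, while the LMS step is amplified by orders of magnitude by the impulsive sample.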
4.4.2 Sign Data Algorithm
With the same motivation of reducing the computational cost of the LMS, the sign
function can be applied to the regressor. The resulting Sign Data algorithm (SDA),
also called Sign Regressor algorithm, has the recursion
$$\mathbf{w}(n) = \mathbf{w}(n-1) + \mu\,\mathrm{sign}\!\left[\mathbf{x}(n)\right] e(n), \qquad \text{with initial condition } \mathbf{w}(-1), \qquad (4.42)$$
where the sign function is applied to each element in the regressor. It should be clear
that the computational complexity of the SDA is the same as that of the SEA.
It should also be noticed that, as a result of quantizing the regressor, there are only $2^L$ possible directions for the update. On average, the update might not be a good approximation of the SD direction. This has an impact on the stability of the algorithm. Although the SDA is stable for Gaussian inputs, it is easy to find inputs that result in convergence for the LMS but not for the SDA [13].
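A minimal sketch of one SDA iteration, under the same illustrative conventions as the earlier snippets (real-valued signals, NumPy, an invented function name), might look as follows.

```python
import numpy as np

def sda_update(w, x, d, mu):
    """One SDA iteration, Eq. (4.42):
    w(n) = w(n-1) + mu * sign[x(n)] * e(n),
    with sign() applied elementwise to the regressor."""
    e = d - w @ x                       # the error still uses the true x(n)
    return w + mu * np.sign(x) * e, e
```

Since np.sign(x) takes values in {-1, 0, +1} elementwise, the update direction is restricted to the finite set of sign patterns discussed above ($2^L$ of them for regressors with nonzero entries), which is the source of the stability issue.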