nearly so sparse; hence, the gains are only on the order of a factor of 2 in convergence
time. IPNLMS was designed to work well for both sparse and nonsparse filters, and
thus it outperforms the other algorithms in this case.
9.2.4 Complexity Reduction Techniques for Sparse Adaptive Equalizers
There are two benefits to a priori knowledge that the ideal filter is sparse. First, as dis-
cussed in the previous section, knowledge of sparsity can be incorporated into the
adaptation rule to accelerate convergence of the important taps while suppressing
(often advantageously) the adaptation of the less important taps. Second, the adapta-
tion rule itself can be implemented sparsely, saving computations. Adaptation rules
that are themselves sparse are generally referred to as partial-update algorithms. Some-
times they may incur a performance penalty, but if the system truly is sparse, they may
actually improve performance. Whether or not the true system is actually sparse, they
will always reduce the computational load of the equalizer by a fixed, known amount.
Hence, they are very attractive for adaptive equalizers implemented in low-power,
battery-operated devices.
As the name implies, partial-update algorithms only update a subset of M of the equal-
izer coefficients (out of the total of N + 1 coefficients) at each iteration. The distinctions
between different partial-update algorithms are primarily the number of taps updated
per iteration and the method by which the taps to be updated are chosen. Computational
complexity is reduced to roughly a fraction M / N of the full-update cost, although the exact
savings depend upon the algorithm that is being partially updated.
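To make the bookkeeping concrete, the sketch below shows one iteration of a generic
partial-update LMS equalizer in Python; the function name, variable names, and the
externally supplied index set update_idx are illustrative assumptions rather than
notation taken from this text.

import numpy as np

def partial_update_lms(w, x_buf, d, mu, update_idx):
    # w          : current equalizer coefficients (length N + 1)
    # x_buf      : the N + 1 most recent input samples (the regressor)
    # d          : desired response for this iteration
    # mu         : step size
    # update_idx : indices of the M taps selected for updating this iteration
    y = np.dot(w, x_buf)          # the filter output still uses all taps
    e = d - y                     # error signal
    # Only the selected taps receive a gradient step, so the update
    # costs roughly M multiplies instead of N + 1.
    w[update_idx] += mu * e * x_buf[update_idx]
    return w, e

A sequential (round-robin) schedule would, for example, supply
update_idx = (k * M + np.arange(M)) % (N + 1) at iteration k; the algorithms discussed
next differ only in how this index set is chosen.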
One approach is to simply cycle through the taps to be updated at each iteration,
as in sequential or periodic partial-update LMS [43]. For this baseline approach, one
would expect the algorithm to take N / M times longer to converge. More sophisticated
partial-update algorithms attempt to choose the M taps to be updated so that the best
possible update can be formed under this constraint. However, care must be taken to
ensure that the computational burden of the tap-selection mechanism does not offset
the computational savings of the partial update. Max-NLMS [44] updates a single tap at
each iteration, namely the tap with the largest corresponding input. M-max-NLMS [45]
generalizes this to update the M taps with the M largest inputs. The rationale is that
the taps with the largest inputs will have the largest innovations, since the updates in
NLMS algorithms are proportional to the tap inputs. Heuristically, the largest
innovations should lead to the largest improvement in the cost function. By selecting
the M taps that promise the greatest improvement in the cost function, the update
discards the least useful steps and concentrates computation on the most useful ones.
This also mitigates the misadjustment caused by adapting unnecessary taps, which can
improve the algorithm's performance to a degree that depends on how necessary each tap
actually is. Since choosing the largest inputs requires only a small number of
comparisons (each essentially a subtraction) and no multiplies, this selection
criterion adds very little complexity overhead.
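As a concrete illustration of this selection rule, here is a minimal sketch of one
M-max-NLMS iteration, assuming a NumPy implementation that normalizes by the full
regressor energy (implementations differ on exactly how the normalization is formed);
the names are illustrative.

import numpy as np

def m_max_nlms(w, x_buf, d, mu, M, eps=1e-8):
    y = np.dot(w, x_buf)                        # equalizer output
    e = d - y                                   # error signal
    # Select the M taps whose regressor entries have the largest magnitudes;
    # argpartition finds them with comparisons only, no multiplies.
    idx = np.argpartition(np.abs(x_buf), -M)[-M:]
    norm = np.dot(x_buf, x_buf) + eps           # NLMS normalization (full regressor energy assumed)
    w[idx] += (mu / norm) * e * x_buf[idx]      # update only the selected taps
    return w, e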
All of these partial-update algorithms can be cast into a generic gradient-descent
form in which only the selected subset of taps receives a step at each iteration.
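One common way to write this form in the partial-update literature (the notation below
is assumed for illustration and may not match this text's own symbols) uses a diagonal
tap-selection matrix S_k with exactly M unit entries:

% w_k : coefficient vector at iteration k;  x_k : regressor;  e_k : error;  mu : step size
% S_k : diagonal selection matrix whose M unit entries mark the taps updated at iteration k
\[
  \mathbf{w}_{k+1} = \mathbf{w}_k + \mu\, e_k\, \mathbf{S}_k \mathbf{x}_k,
  \qquad
  \mathbf{S}_k = \operatorname{diag}(s_{0,k},\ldots,s_{N,k}),
  \quad s_{i,k}\in\{0,1\},
  \quad \sum_{i=0}^{N} s_{i,k} = M .
\]

Sequential partial-update LMS cycles the unit entries of S_k deterministically, while
M-max-NLMS places them on the taps with the M largest regressor magnitudes.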
 