(e.g., 14 and 93). From these diagrams it is possible to examine the significant time lags for each neuron in terms of its contribution to the filter output. For instance, in the case of neuron 7 or 93, the most recent bin counts seem to be more correlated with the current output, whereas for neuron 23 or 74, the delayed bin counts seem to be more correlated with the current output. The corresponding reconstruction trajectories for this particular weight matrix are presented in Figure 3.5. For the two reaches presented, the Wiener filter does a fair job of reconstructing the general shape of the trajectory but has difficulty reaching the peaks and reproducing the details of the movement.
3.1.2 Iterative Algorithms for Least Squares: The Normalized LMS
As is well known [32], there are iterative or adaptive algorithms that approximate the Wiener solution sample by sample. The most widely known family of algorithms is based on gradient descent, where the weights are corrected at each sample proportionally to the negative of the gradient:
$$ \mathbf{w}(n+1) = \mathbf{w}(n) - \eta\,\nabla J(n) \tag{3.16} $$
where η is called the stepsize. The most famous is without a doubt the LMS algorithm, which locally approximates the gradient of the cost function, yielding
$$ \mathbf{w}(n+1) = \mathbf{w}(n) + \eta\,e(n)\,\mathbf{x}(n) \tag{3.17} $$
One of the remarkable properties of this procedure is that the weight update is local to each weight and requires only two multiplications per weight. It is therefore very easy to program, and it reduces the complexity of the calculations to O(N), which makes it very appropriate for DSP hardware implementations.
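To make the update in Eq. (3.17) concrete, the following is a minimal sketch of an LMS filter in Python. The function name, the synthetic data, and the stepsize value are illustrative assumptions, not part of the original text:

```python
import numpy as np

def lms_filter(x, d, num_taps, eta):
    """Sketch of the LMS update in Eq. (3.17):
    w(n+1) = w(n) + eta * e(n) * x(n).
    x: input signal, d: desired signal, eta: stepsize."""
    w = np.zeros(num_taps)                    # filter weights
    y = np.zeros(len(x))                      # filter outputs
    e = np.zeros(len(x))                      # instantaneous errors
    for n in range(num_taps - 1, len(x)):
        x_n = x[n - num_taps + 1:n + 1][::-1] # tap vector x(n) = [x(n), ..., x(n-N+1)]
        y[n] = w @ x_n                        # filter output
        e[n] = d[n] - y[n]                    # error e(n)
        w = w + eta * e[n] * x_n              # local O(N) weight update, Eq. (3.17)
    return w, y, e

# Illustrative usage on synthetic data (assumed, not from the text):
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
d = np.convolve(x, [0.5, -0.3, 0.1], mode="full")[:1000]  # "unknown" system
w, y, e = lms_filter(x, d, num_taps=3, eta=0.05)
```

Because each weight sees only its own input tap and the shared error, the loop body touches each weight exactly once per sample, which is the locality property noted above.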
The price paid for this simplifying alternative is the need to set an extra parameter, the stepsize, which controls both the speed of adaptation and the misadjustment, defined as the normalized excess MSE (i.e., an added penalty to the minimum MSE obtained with the optimal LS solution). For convergence, the stepsize is upper bounded by the inverse of the largest eigenvalue of the input autocorrelation matrix. Practically, the stepsize can be estimated as 10% of the inverse of the trace of the input autocorrelation matrix [33]. Steepest descent algorithms with a single stepsize possess an intrinsic compromise between speed of adaptation and misadjustment, captured by the relations:
$$ M = \frac{\eta}{2}\,\mathrm{Tr}(\mathbf{R}) \tag{3.18} $$

$$ \tau = \frac{1}{2\,\eta\,\lambda_{\min}} \tag{3.19} $$
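As a rough illustration of the practical rule above, the following sketch estimates the stepsize as 10% of the inverse of the trace of the input autocorrelation matrix and evaluates the resulting misadjustment and time constant from Eqs. (3.18) and (3.19). The function name and the synthetic data are assumptions for illustration:

```python
import numpy as np

def stepsize_from_trace(X):
    """Estimate eta = 0.1 / Tr(R), where R is the input autocorrelation
    matrix estimated from the data matrix X (rows are input tap vectors).
    Rule of thumb from [33]."""
    R = (X.T @ X) / X.shape[0]          # sample autocorrelation matrix
    eta = 0.1 / np.trace(R)             # 10% of the inverse of the trace
    M = eta * np.trace(R) / 2.0         # misadjustment, Eq. (3.18)
    lam_min = np.linalg.eigvalsh(R).min()
    tau = 1.0 / (2.0 * eta * lam_min)   # adaptation time constant, Eq. (3.19)
    return eta, M, tau

# Illustrative usage on random tap vectors (assumed data):
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))     # 1000 samples of a 10-tap input
eta, M, tau = stepsize_from_trace(X)
```

Note that with this rule the misadjustment in Eq. (3.18) is fixed at 5%, while the time constant in Eq. (3.19) still depends on the smallest eigenvalue, which exposes the speed-versus-misadjustment compromise directly.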
 