where λ is the Lagrange multiplier and d(n) is the desired response. Solving this optimization problem with Lagrange multipliers, the weight update becomes

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \frac{1}{\left\| \mathbf{x}(n) \right\|^{2}}\, \mathbf{x}(n)\, e(n) \tag{3.21}$$
Practically, in our BMI linear model, ( 3.21 ) becomes
$$\mathbf{w}_{c}^{\mathrm{NLMS}}(n+1) = \mathbf{w}_{c}^{\mathrm{NLMS}}(n) + \frac{\eta}{\gamma + \left\| \mathbf{x}(n) \right\|^{2}}\, e_{c}(n)\, \mathbf{x}(n) \tag{3.22}$$
where η satisfies 0 < η < 2 and γ is a small positive constant; e_c(n) is the error sample for coordinate c, and x(n) is the input vector. If we let η(n) ≡ η/(γ + ||x(n)||²), then the NLMS algorithm can be viewed as the LMS algorithm with a time-varying learning rate, such that
$$\mathbf{w}_{c}^{\mathrm{NLMS}}(n+1) = \mathbf{w}_{c}^{\mathrm{NLMS}}(n) + \eta(n)\, e_{c}(n)\, \mathbf{x}(n) \tag{3.23}$$
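The update above can be sketched in a few lines of code. The following is a minimal illustration, not the chapter's implementation: it applies the NLMS step to a hypothetical 10-tap system-identification problem with synthetic Gaussian inputs, where `w_true`, `eta`, and `gamma` are assumed values chosen for the demonstration.

```python
import numpy as np

def nlms_update(w, x, d, eta=0.5, gamma=1e-6):
    """One NLMS step for a single output coordinate:
    w(n+1) = w(n) + eta / (gamma + ||x(n)||^2) * e(n) * x(n)."""
    e = d - w @ x                            # error sample e(n)
    w = w + (eta / (gamma + x @ x)) * e * x  # normalized update
    return w, e

# Illustrative run on synthetic data (hypothetical system, not BMI data)
rng = np.random.default_rng(0)
w_true = rng.standard_normal(10)   # unknown system to identify
w = np.zeros(10)
abs_err = []
for _ in range(2000):
    x = rng.standard_normal(10)
    d = w_true @ x                 # noiseless desired response
    w, e = nlms_update(w, x, d)
    abs_err.append(abs(e))
```

Dividing by γ + ||x(n)||² makes the effective step size shrink when the input power is large, which is what gives NLMS its robustness to input-scale variations relative to plain LMS.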
Although the weights in NLMS converge in the mean to the Wiener filter solution for stationary data, they may differ for nonstationary data. This algorithm may therefore have better tracking performance than the Wiener filter applied to segments of the data, so its performance should be compared with that of the Wiener filter for BMIs. For the same data
set presented in Section 3.1.1, a 10-tap linear filter was trained with LMS (stepsize = 0.0001) using a data set of 10,000 points. To investigate the effect of an online update, the absolute value of the update
for each weight was saved at each time t . As shown in Figure 3.6 , histogram bins are placed in the
range of 0 to 0.01, with centers placed every 0.001. Each coordinate direction for the 3D reaching
task has 1040 weights associated with it. Each color in the histogram represents one of the 1040
weights. For example, dark blue represents weight 1, which was updated 10,000 times, most of which had a value of zero. Any occurrence of an update greater than 0.01 is placed in the 0.01 bin. Only
15% of the total number of weight updates had a value other than zero. For this particular stepsize,
this result indicates that one may be able to reduce the number of calculations because many of the
weights are not contributing significantly to the mapping. The experiment was also repeated with
the normalized LMS algorithm (time-varying stepsize), and the same trend can be observed.
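The update-magnitude analysis described above can be mimicked as follows. This is a sketch under assumed conditions: a synthetic input signal and a hypothetical `w_true` stand in for the neural data and the trained mapping, and only the fraction of updates exceeding the first bin edge (0.001) is computed rather than the full histogram.

```python
import numpy as np

# Train a 10-tap filter with LMS (stepsize 0.0001) and record the
# absolute value of every weight update, as in the experiment above.
rng = np.random.default_rng(1)
mu, n_taps, n_samples = 1e-4, 10, 10_000
w_true = rng.standard_normal(n_taps)      # assumed target mapping
x_sig = rng.standard_normal(n_samples + n_taps)
w = np.zeros(n_taps)
updates = np.empty((n_samples, n_taps))
for n in range(n_samples):
    x = x_sig[n:n + n_taps]               # tap-delay input vector
    e = w_true @ x - w @ x                # error sample
    delta = mu * e * x                    # LMS update for each weight
    updates[n] = np.abs(delta)
    w += delta

# Fraction of updates that land outside the zero-centered first bin
frac_nonzero = np.mean(updates >= 0.001)
```

With a small stepsize, most recorded updates fall below the first bin edge, consistent with the observation that only a minority of updates are nonzero and that computation could be reduced accordingly.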
Once the model was trained, it was also tested on the trajectories presented in Section 3.1.1, as shown in Figure 3.6. For this BMI application, the performance of the Wiener filter and the filter trained using LMS was essentially the same. Performance was quantified using the correlation coefficient (CC) between the model output and the true trajectory. Here, the Wiener filter had a CC of 0.76 ± 0.19, while the LMS-trained filter produced a CC of 0.75 ± 0.20.
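The CC metric used above is the standard Pearson correlation coefficient. A minimal sketch with made-up placeholder trajectories (not the reaching-task data from Section 3.1.1):

```python
import numpy as np

# Placeholder trajectories: a "true" hand path and a model output
# that tracks it closely. Real BMI evaluation would use recorded data.
true_traj = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0])
model_out = np.array([0.2, 0.9, 2.1, 2.8, 4.2, 3.1, 1.8])

# Pearson correlation coefficient between output and desired trajectory
cc = np.corrcoef(true_traj, model_out)[0, 1]
```

In practice the CC is computed per coordinate over test windows, and the ± values reported above reflect variation across those windows.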