the sense of minimum error power) between different time series using a nonparametric approach (i.e., without requiring a specific model and with only mild assumptions on the time series generation). These advantages must be weighed against the abstract (nonstructural) level of the modeling and the many difficulties of the method, such as determining a reasonable fit, a model order, and a topology that appropriately represents the relationships among the input and desired response time series.
3.1 MULTIVARIATE LINEAR MODELS
The first black box model that we will discuss is the linear model, which has been the workhorse for time series analysis, system identification, and controls [2]. Perhaps the first account of a filter-based BMI [25] used a linear model trained by least squares, with one second of spike data per neuron (i.e., the current and nine past bins) as a memory buffer, to predict a lever press in a rat model. It turns out that least squares with past inputs can be shown to be equivalent to the Wiener solution for a finite impulse response filter. Therefore, the first BMI was trained with a Wiener filter. The bulk of the results reported in the literature use this very simple but powerful linear solution [8, 10, 26, 27]. Because in our BMI application the filter takes a slightly different form owing to the multidimensional input and output, it will be presented more carefully than the others.
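To illustrate this equivalence concretely, the sketch below fits a tap-delay (FIR) decoder by ordinary least squares on binned spike counts. The surrogate Poisson spike counts, the random kinematic target, and the ten-tap memory per neuron (current plus nine past bins) are placeholders chosen for the example; they are not taken from [25].

import numpy as np

# Minimal sketch: batch least-squares (Wiener-style) FIR decoder on binned
# spike counts. Assumed setup: N neurons, T time bins, a 10-tap memory per
# neuron (the current bin plus nine past bins), and a 1-D kinematic target.
rng = np.random.default_rng(0)
N, T, L = 10, 2000, 10                                   # neurons, bins, taps
spikes = rng.poisson(2.0, size=(T, N)).astype(float)     # surrogate spike counts
kinematics = rng.standard_normal(T)                      # surrogate desired response

# Build the tap-delay (embedded) input matrix: each row holds the current and
# nine past bins of every neuron, plus a constant term for the bias.
rows = []
for n in range(L - 1, T):
    rows.append(spikes[n - L + 1:n + 1][::-1].ravel())   # newest bin first
X = np.hstack([np.ones((T - L + 1, 1)), np.array(rows)]) # (T-L+1) x (N*L + 1)
d = kinematics[L - 1:]

# Batch least squares; this is the sample counterpart of the Wiener solution
# R^{-1} p for a finite impulse response filter.
w, *_ = np.linalg.lstsq(X, d, rcond=None)
prediction = X @ w
print("training MSE:", np.mean((prediction - d) ** 2))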
Let us assume that an M-dimensional multiple time series is generated by a stationary, stable vector autoregressive (VAR) model
x(n) = b + w_1 x(n - 1) + ... + w_L x(n - L) + u(n)                              (3.1)
where b = [b_1, ..., b_M]^T is a vector of intercept terms, the matrices w_i are coefficient matrices of size M × M, and u(n) is white noise with nonsingular covariance matrix Σ. We further assume that we observe the time series during a period of T samples. The goal of the modeling is to determine the model coefficients from the data. Multivariate least squares estimation can be applied to this problem without any further assumptions. Let us develop a vector notation to solve the optimization problem.
X   = [x_1, ..., x_T]                       (M × T)
A   = [b, w_1, ..., w_L]                    (M × (ML + 1))
Z_n = [1, x_n^T, ..., x_{n-L+1}^T]^T        ((ML + 1) × 1)
Z   = [Z_0, ..., Z_{T-1}]                   ((ML + 1) × T)
U   = [u_1, ..., u_T]                       (M × T)
χ   = vec[X]                                (MT × 1)
α   = vec[A]                                ((M^2 L + M) × 1)                    (3.2)
ν   = vec[U]                                (MT × 1)
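To make the notation of (3.1) and (3.2) concrete, the sketch below simulates a small VAR process and recovers A = [b, w_1, ..., w_L] by multivariate least squares. Writing the model compactly as X = AZ + U, the closed form used here, A_hat = X Z^T (Z Z^T)^{-1}, is the standard multivariate least-squares estimator and is assumed for this example rather than taken from the text; the dimensions (M = 2, L = 2) and coefficient values are likewise illustrative only.

import numpy as np

# Minimal sketch of (3.1)-(3.2): simulate a 2-D VAR(2) process and recover
# A = [b, w_1, ..., w_L] by multivariate least squares. All numerical values
# are illustrative only.
rng = np.random.default_rng(1)
M, L, T = 2, 2, 5000
b = np.array([0.1, -0.2])
w = [np.array([[0.5, 0.1], [0.0, 0.4]]),    # w_1
     np.array([[-0.2, 0.0], [0.1, 0.3]])]   # w_2

# Generate x(n) = b + w_1 x(n-1) + ... + w_L x(n-L) + u(n), eq. (3.1)
x = np.zeros((M, T + L))
for n in range(L, T + L):
    x[:, n] = b + sum(w[i] @ x[:, n - 1 - i] for i in range(L)) \
              + rng.standard_normal(M) * 0.5        # white noise u(n)
x = x[:, L:]                                        # keep T samples

# Targets and regressors: each column of Z is Z_{n-1} = [1, x(n-1)^T, ...,
# x(n-L)^T]^T, the regressor for the corresponding target x(n) (cf. eq. (3.2)).
X = x[:, L:]                                        # M x (T - L)
Z = np.vstack([np.ones((1, T - L))] +
              [x[:, L - 1 - i:T - 1 - i] for i in range(L)])   # (ML+1) x (T-L)

# Multivariate least squares: with X = A Z + U, the estimate is
# A_hat = X Z^T (Z Z^T)^{-1}, whose columns are [b_hat, w_1_hat, ..., w_L_hat].
A_hat = X @ Z.T @ np.linalg.inv(Z @ Z.T)
alpha_hat = A_hat.reshape(-1, order="F")            # vec[A_hat], length M^2*L + M
print("b_hat  =", A_hat[:, 0])
print("w1_hat =\n", A_hat[:, 1:1 + M])
print("w2_hat =\n", A_hat[:, 1 + M:1 + 2 * M])

With enough samples the estimated intercept and coefficient matrices approach the values used in the simulation, which is a quick sanity check on the notation above.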
 