FIGURE 3.7: TDNN topology. [Block diagram: inputs x1(t), ..., xN(t) feed tapped delay lines (z^-1 elements); weight matrix W1 and the hidden nonlinearities produce y1(t), and weight matrix W2 produces the output y2(t).]
function or a hyperbolic tangent [23]). A full description of the TDNN can be found in References [23, 33]. One of the appeals of the input-based TDNN is that it is a dynamic neural network, yet it still uses the conventional backpropagation training of the MLP [33].
The TDNN topology (Figure 3.7) is more powerful than the linear FIR filter. In fact, looking at a hidden-layer PE just before the nonlinearity, we can recognize an adaptive FIR filter. The TDNN is therefore a nonlinear combination of adaptive FIR filters, and it has been shown to be a universal approximator in functional spaces [36], provided there is enough memory depth and a sufficient number of hidden PEs. Alternatively, each hidden PE's output can be thought of as an adaptive basis, obtained nonlinearly from the high-dimensional input data, that defines the projection space for the desired response. The best (orthogonal) projection of the desired hand movements can then be obtained in this projection space. Notice that the TDNN can be considered a "double" adaptive system, in which both the axes of the projection space and the optimal projections are adapted. This conceptual picture helps configure both the TDNN topology and the parameters of the training algorithm.
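To make the FIR-filter view concrete, here is a minimal sketch in Python with NumPy of a single hidden PE. All sizes, the random data, and the names (spikes, taps, pe_preactivation) are illustrative assumptions, not from the text: the point is only that the pre-activation is a sum of L-tap FIR filters, one per input neuron, which the tanh then turns into one adaptive basis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, T = 4, 5, 100           # input neurons, memory depth (taps), time steps
spikes = rng.poisson(1.0, size=(N, T)).astype(float)   # binned spike counts
taps = rng.normal(size=(N, L))                         # one L-tap FIR filter per neuron

def pe_preactivation(t):
    # Pre-activation of one hidden PE at time t: each neuron's recent
    # counts x(t), x(t-1), ..., x(t-L+1) pass through its FIR filter,
    # and the filter outputs are summed across neurons.
    window = spikes[:, t - L + 1 : t + 1][:, ::-1]     # shape (N, L), newest first
    return float(np.sum(taps * window))

# The tanh turns the filtered sum into one adaptive, nonlinear basis.
y_hidden = np.tanh(pe_preactivation(t=50))
print(y_hidden)
```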
The output of the first hidden layer of the network can be described by the relation y1(t) = f(w1 x(t)), where f(·) is the hyperbolic tangent nonlinearity, tanh(βx).² The input vector x contains the L most recent spike counts from each of the N input neurons. In this model, the delayed versions of the firing counts, x(t − L), are the bases that construct the output of the hidden layer. The number of delays in the topology should be set so that there is significant coupling between the neuronal input and the desired signal. The output layer of the network produces the hand trajectory y2(t) as a linear combination of the hidden states, y2(t) = w2 y1(t). The weights (w1, w2) of this network can
² The logistic function is another common nonlinearity used in neural networks.
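A sketch of the full forward pass described above, assuming arbitrary layer sizes and randomly initialized weights (the names tdnn_forward, W1, W2, and beta are hypothetical): it computes y1(t) = tanh(β w1 x(t)) from the tapped-delay input and the linear output y2(t) = w2 y1(t).

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, H, D = 10, 10, 5, 2     # neurons, delay depth, hidden PEs, output dims (hand x, y)
beta = 1.0
W1 = rng.normal(scale=0.1, size=(H, N * L))   # hidden-layer weight matrix w1
W2 = rng.normal(scale=0.1, size=(D, H))       # linear output weight matrix w2

def tdnn_forward(history):
    # history: (N, L) array holding the L most recent spike counts per neuron.
    x = history[:, ::-1].ravel()              # x(t) stacks counts at t, t-1, ..., t-L+1
    y1 = np.tanh(beta * (W1 @ x))             # hidden layer: y1(t) = f(w1 x(t))
    y2 = W2 @ y1                              # linear output: y2(t) = w2 y1(t)
    return y2

counts = rng.poisson(2.0, size=(N, L)).astype(float)
print(tdnn_forward(counts))                   # predicted hand coordinates at time t
```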