capabilities in myopic functional spaces. The fundamental idea of an ESN is to use a "large-reservoir" recurrent neural network that can produce diversified representations of an input signal, which can then be instantaneously combined in an optimal manner to approximate a desired response. ESNs thus possess a representational recurrent infrastructure that brings information from the past of the input into the present sample without adapting parameters. The short-term memory of the ESN is designed a priori, and the neuronal data are processed through a reservoir of recurrent networks that are linked by a random, sparse matrix. Consequently, only the adaptive linear or nonlinear regressor (static mapper) needs to be trained to implement functional mappings, and that training can be done online in O(N) time, unlike the Wiener filter or the RMLP, whose training algorithms are O(N^2).
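To make the complexity claim concrete, the sketch below shows an online least-mean-squares (LMS) update of the linear readout alone, assuming the reservoir states have already been computed; the function name, the learning rate eta, and the array shapes are illustrative assumptions rather than details from the text:

import numpy as np

def train_readout_lms(states, desired, eta=1e-3):
    """Online LMS training of the ESN's linear readout (illustrative sketch).

    states  : T x N array of reservoir state vectors x_n (reservoir stays fixed).
    desired : T x 2 array of desired outputs d_n (e.g., X-Y hand coordinates).
    Only the 2 x N readout matrix adapts, so each update costs O(N) per output.
    """
    W_out = np.zeros((desired.shape[1], states.shape[1]))  # static mapper weights
    for x, d in zip(states, desired):
        y = W_out @ x                      # instantaneous combination of states
        e = d - y                          # error against the desired response
        W_out += eta * np.outer(e, x)      # LMS step, O(N) per output channel
    return W_out

Because the reservoir weights never change, each sample requires only a rank-one update of the readout matrix, which is what permits online training.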
algorithms. Figure 3.15 shows the block diagram of an ESN. A set of input nodes denoted by the
vector
M
×
1
is connected to a “reservoir” of N discrete time-recurrent networks by a connec-
u
M
N
tion matrix
W . At any time instant n , the readout (state output) from the recurrent neural
network (RNN) reservoir is a column vector denoted by
×
in
N
n x . Additionally, an ESN can have
feedback connections from the output to the RNN reservoir. In Figure 3.15, we show two outputs (representing the X-Y Cartesian coordinates of hand trajectory) and the associated feedback connection matrix, W^b = [w_b1, w_b2] (N × 2). The desired outputs form a 2D column vector d_n = [d_xn; d_yn] (2 × 1).
The reservoir states are transformed by a static linear mapper that can additionally receive contributions from the input u. Each PE in the reservoir can be implemented as a leaky integrator (first-order gamma memory [44]), and the state output or the readout is given by a first-order difference equation.
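Assuming the standard leaky-integrator ESN state update, x_{n+1} = (1 - mu) x_n + mu f(W^in u_{n+1} + W x_n + W^b d_n), a minimal sketch of the reservoir forward pass follows; the leak rate mu, the 10% connection density, and the 0.9 spectral-radius scaling are illustrative choices rather than values from the text:

import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 100                                   # input and reservoir sizes

# Random, sparse recurrent connection matrix, scaled to spectral radius 0.9
# (below 1) so the reservoir retains a fading short-term memory of the input.
W = rng.uniform(-1, 1, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

W_in = rng.uniform(-1, 1, (N, M))               # input connections, N x M
W_b = rng.uniform(-1, 1, (N, 2))                # output feedback, N x 2

def esn_states(u, d, mu=0.3):
    """Run the reservoir over inputs u (T x M) with feedback signal d (T x 2);
    each PE is a leaky integrator with leak rate mu (illustrative value)."""
    x = np.zeros(N)
    states = []
    for n in range(len(u)):
        # Desired signal stands in for the fed-back output (teacher forcing),
        # delayed by one step; an assumption, not spelled out in the text.
        fb = W_b @ d[n - 1] if n > 0 else np.zeros(N)
        x = (1 - mu) * x + mu * np.tanh(W_in @ u[n] + W @ x + fb)
        states.append(x.copy())
    return np.array(states)                     # T x N matrix of readouts x_n

The readout matrix trained in the earlier sketch would then map each x_n to the two output coordinates.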
FIGURE 3.15: Block diagram of the echo state networks.
 