Fig. 6.13 Percentage of convergence over 100 experiments versus the corresponding average number of symbols needed to learn the problem (NS), for the standard RTRL and the RTRL-ZED. First experiment. a) Network with 4 neurons. b) Network with 6 neurons.
Recurrent Learning (RTRL) [187] in problems that require retaining information over long time intervals [103]. LSTM is able to deal with this problem because its gates protect error information from decaying.
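
To make the gating idea concrete, below is a minimal NumPy sketch of one step of a standard LSTM cell (a generic textbook formulation with input, forget, and output gates; the stacked parameters W, U, b and their layout are illustrative assumptions, not the book's notation):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_cell_step(x, h_prev, c_prev, W, U, b):
        # W: (4H, D), U: (4H, H), b: (4H,) stack the parameters of the
        # input (i), forget (f), output (o) gates and the cell candidate (g).
        z = W @ x + U @ h_prev + b
        H = h_prev.shape[0]
        i = sigmoid(z[0*H:1*H])   # input gate: admits new information
        f = sigmoid(z[1*H:2*H])   # forget gate: retains the old cell state
        o = sigmoid(z[2*H:3*H])   # output gate: exposes the cell state
        g = np.tanh(z[3*H:4*H])   # candidate cell content
        c = f * c_prev + i * g    # additive memory update
        h = o * np.tanh(c)        # gated hidden output
        return h, c

The additive update c = f * c_prev + i * g is what protects the error path: gradients flow through the cell state without being repeatedly squashed by nonlinearities, so error information survives long time lags.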
This section shows how to adapt LSTM, using MEE, to learn long time lapse classification problems, and presents several experiments showing the benefits of this approach.
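
The MEE criterion is commonly realized, in information-theoretic learning, by minimizing Rényi's quadratic entropy of the training error, estimated with a Parzen window. A minimal sketch of that estimator, assuming a one-dimensional error vector and a hand-chosen kernel width sigma (both assumptions; the exact formulation used in the experiments may differ):

    import numpy as np

    def gaussian_kernel(u, sigma):
        return np.exp(-u**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

    def information_potential(errors, sigma=1.0):
        # Parzen estimate of V(e) = (1/N^2) sum_i sum_j G(e_i - e_j),
        # where G has bandwidth sigma*sqrt(2) because the estimate
        # convolves two Gaussian kernels.
        diffs = errors[:, None] - errors[None, :]
        return gaussian_kernel(diffs, sigma * np.sqrt(2.0)).mean()

    def mee_loss(errors, sigma=1.0):
        # Renyi's quadratic error entropy H_2(e) = -log V(e); lower is better.
        return -np.log(information_potential(errors, sigma))

Because the logarithm is monotonic, minimizing H_2(e) is equivalent to maximizing the information potential V(e), so training amounts to ascending the gradient of V with respect to the network weights.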
 