Fig. 5. Result of comparative evolutionary performance test: (a) mean success rate [%] and (b) mean best error, each plotted against iteration for the RNN (97.3%) and the FNN (91.3%).
Fig. 6. Trajectories of the load mass on the X-Y plane, showing the mass and jib-tip paths from start to end (X [m]): (a) θ0 = π/2; (b) θ0 = π.
in the FNN is H = 7, and the initial population range is [−1.5, 1.5]; this range is then applied to the RNN. In the RNN, only H = 2 hidden neurons are used; this number is kept small to limit the computational cost. Namely, the structures of the FNN and RNN are 4-7-1 and 4-2-1, respectively. It should be noted that the above initial population range and the number of hidden neurons used for the RNN are not the optimal values for its performance. In both networks, we use the linear function f_io.act(x) = x as the activation function of the input and output layers and the hyperbolic tangent function f_h.act(x) = tanh(x) for the hidden layer.
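As a concrete illustration of this structure, the following C sketch (not the authors' implementation; the struct, names, and bias terms are assumptions) computes one forward pass of the 4-7-1 FNN with the stated activations; the 4-2-1 RNN would differ in its hidden-layer size and its recurrent connections.

/* Illustrative sketch: forward pass of the 4-7-1 FNN described above,
 * with f_io.act(x) = x on the input/output layers and
 * f_h.act(x) = tanh(x) on the hidden layer.
 * Weight/bias names and the bias terms themselves are assumptions. */
#include <math.h>

#define N_IN  4
#define N_HID 7
#define N_OUT 1

typedef struct {
    double w_ih[N_HID][N_IN];   /* input-to-hidden weights  */
    double b_h[N_HID];          /* hidden biases (assumed)  */
    double w_ho[N_OUT][N_HID];  /* hidden-to-output weights */
    double b_o[N_OUT];          /* output biases (assumed)  */
} fnn_t;

void fnn_forward(const fnn_t *net, const double x[N_IN], double y[N_OUT])
{
    double h[N_HID];

    for (int j = 0; j < N_HID; ++j) {            /* hidden layer: tanh */
        double s = net->b_h[j];
        for (int i = 0; i < N_IN; ++i)
            s += net->w_ih[j][i] * x[i];         /* input layer is linear */
        h[j] = tanh(s);
    }
    for (int k = 0; k < N_OUT; ++k) {            /* output layer: linear */
        double s = net->b_o[k];
        for (int j = 0; j < N_HID; ++j)
            s += net->w_ho[k][j] * h[j];
        y[k] = s;
    }
}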
The mean success rates and mean global best errors of the PSO in the Design using the RNN and FNN are shown in Figs. 5(a) and (b), respectively. The RNN achieved a higher mean success rate and a lower mean error than the FNN (the success rates at the end of the evolution were 97.3% and 91.3%, respectively).
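For clarity, the two quantities plotted in Fig. 5 can be aggregated over independent PSO runs as in the sketch below (not the authors' code; the number of runs, the array layout, and the success threshold SUCCESS_ERR are assumptions, since the actual success criterion is defined in the paper).

/* Illustrative sketch: per-iteration mean success rate and mean global
 * best error over R independent PSO runs.  best_err[r][t] is assumed to
 * hold the global best error of run r at iteration t. */
#define RUNS        100    /* assumed number of independent runs */
#define ITERATIONS  100
#define SUCCESS_ERR 1e-4   /* assumed success threshold */

void summarize(const double best_err[RUNS][ITERATIONS],
               double mean_rate[ITERATIONS],
               double mean_err[ITERATIONS])
{
    for (int t = 0; t < ITERATIONS; ++t) {
        int successes = 0;
        double sum = 0.0;
        for (int r = 0; r < RUNS; ++r) {
            if (best_err[r][t] <= SUCCESS_ERR)
                ++successes;
            sum += best_err[r][t];
        }
        mean_rate[t] = 100.0 * successes / RUNS;  /* mean success rate [%] */
        mean_err[t]  = sum / RUNS;                /* mean global best error */
    }
}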
ii) Computational cost: The tests were implemented in the C programming language and run on a PC with a dual-core 2.2 GHz CPU and 2 GB of RAM running Ubuntu Linux. A single run of the Design using the FNN took 1.89 seconds, whereas it took only 1.08 seconds with the RNN. Clearly, the use of the RNN noticeably reduced the computation time.
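A minimal sketch of how such a per-run time could be measured on Linux is shown below; run_design() is a hypothetical placeholder for one complete run of the Design, not a function from the paper.

/* Minimal timing sketch (assumption: one run is wrapped in run_design()). */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

static void run_design(void)
{
    /* hypothetical placeholder for one evolutionary run of the Design */
}

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    run_design();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("single run took %.2f s\n", elapsed);
    return 0;
}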