multi-output neuro-fuzzy network can predict this data region with reasonably high
accuracy.
Table 6.1(a). Training and forecasting performance of the Takagi-Sugeno-type multi-input
multi-output neuro-fuzzy network with the proposed backpropagation algorithm for the
electrical load time series.

Sl. No. 1 (training data 1 to 1500, after training with the backpropagation algorithm):
    Final SSE with pre-scaled data (scale factor = 0.001):
        SSE = 23.8580 (SSE1 = 3.0077, SSE2 = 7.2863, SSE3 = 13.5640)
    Final SSE, MSE, RMSE with original (non-scaled) data:
        SSE = 2.3858e+005;
        MSE1 = 30.0772, MSE2 = 72.8630, MSE3 = 135.6401;
        RMSE1 = 5.4843, RMSE2 = 8.5360, RMSE3 = 11.6465

Sl. No. 2 (training and validation data points 1 to 3489):
    Final SSE with pre-scaled data (scale factor = 0.001):
        SSE = 53.5633 (SSE1 = 6.8169, SSE2 = 16.6395, SSE3 = 30.1069)
    Final SSE with original (non-scaled) data:
        SSE = 5.3563e+005
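The per-output error figures reported in Table 6.1(a) follow the standard definitions: SSE is summed over the samples for each output, MSE = SSE/N, and RMSE = sqrt(MSE), computed separately for each of the network's three outputs. A minimal sketch in Python, assuming targets and predictions are stored as lists of per-sample output vectors (the helper name is hypothetical, not from the chapter):

```python
import math

def per_output_errors(targets, predictions):
    """Compute SSE, MSE and RMSE separately for each output channel
    of a multi-output predictor (illustrative helper; the chapter's
    own data and network are not reproduced here)."""
    n_samples = len(targets)
    n_outputs = len(targets[0])
    sse = [0.0] * n_outputs
    for t, p in zip(targets, predictions):
        for k in range(n_outputs):
            sse[k] += (t[k] - p[k]) ** 2
    mse = [s / n_samples for s in sse]          # MSE_k = SSE_k / N
    rmse = [math.sqrt(m) for m in mse]          # RMSE_k = sqrt(MSE_k)
    return sse, mse, rmse

# Toy 2-output example with made-up numbers:
targets = [[1.0, 2.0], [3.0, 4.0]]
predictions = [[1.0, 1.0], [2.0, 4.0]]
sse, mse, rmse = per_output_errors(targets, predictions)
```

The total SSE used as the training performance function is then simply the sum of the per-output SSE values, which is consistent with the table (e.g. 3.0077 + 7.2863 + 13.5640 = 23.8580 in row 1).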
Figure 6.7(a) and Table 6.1(a) demonstrate that the proposed backpropagation
algorithm smoothly reduces the sum-squared-error (SSE) performance function from
its initial value of 324.6016 down to 23.8580 in 300 epochs, whereas Figure 6.7(c)
and Table 6.1(b) show the training performance of the proposed
Levenberg-Marquardt algorithm. In the latter case the performance function is
brought down from its initial value of 868.9336 to 22.5777 within just 200 epochs,
indicating the much higher convergence speed of the proposed Levenberg-
Marquardt algorithm compared with the backpropagation algorithm.
Furthermore, the SSE plots in both Figure 6.7(a) and Figure 6.7(c)
show that the training exhibits little oscillation. The results illustrated in
Figure 6.7(b) and Figure 6.7(d), and in Table 6.1(a) and Table 6.1(b), clearly
show the excellent training and forecasting performance of the Takagi-Sugeno-
type multiple-input, multiple-output neuro-fuzzy network with the proposed
training algorithms.
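The convergence comparison above reflects a general property of Levenberg-Marquardt-type training: a damped Gauss-Newton step exploits curvature information that a plain gradient step ignores, and so typically needs far fewer epochs on least-squares objectives. The following is a generic one-parameter sketch of that contrast, not the chapter's neuro-fuzzy training code; the toy model y = exp(a·x) and all names are illustrative assumptions:

```python
import math

# Toy one-parameter nonlinear least squares: fit y = exp(a*x).
xs = [0.1 * i for i in range(10)]
ys = [math.exp(0.5 * x) for x in xs]   # data generated with a = 0.5

def sse(a):
    """Sum-squared-error performance function for parameter a."""
    return sum((y - math.exp(a * x)) ** 2 for x, y in zip(xs, ys))

def jtj_jtr(a):
    """J^T J and J^T r for residuals r_i = y_i - exp(a*x_i),
    with Jacobian entries dr_i/da = -x_i * exp(a*x_i)."""
    jtj = jtr = 0.0
    for x, y in zip(xs, ys):
        f = math.exp(a * x)
        j = -x * f
        jtj += j * j
        jtr += j * (y - f)
    return jtj, jtr

def levenberg_marquardt(a, lam=1e-3, epochs=20):
    """Damped Gauss-Newton: step = -(J^T J + lam)^(-1) J^T r,
    with the damping factor lam adapted after each trial step."""
    for _ in range(epochs):
        jtj, jtr = jtj_jtr(a)
        step = -jtr / (jtj + lam)
        if sse(a + step) < sse(a):
            a += step          # accept step, trust the quadratic model more
            lam *= 0.5
        else:
            lam *= 2.0         # reject step, increase damping
    return a

def gradient_descent(a, lr=0.05, epochs=20):
    """Plain gradient step on the SSE; grad SSE = 2 * J^T r."""
    for _ in range(epochs):
        _, jtr = jtj_jtr(a)
        a -= lr * 2.0 * jtr
    return a
```

On a zero-residual problem like this the Levenberg-Marquardt iteration recovers a = 0.5 in a handful of accepted steps, while gradient descent's progress per epoch is bounded by the step size, which mirrors the epoch counts reported above.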