Table 9.6. Comparison of performance among MLP, FSN and OPFSN (hit percentage).

Database | Hit % in the training set | Hit % in the test set
         | MLP    FSN    OPFSN       | MLP    FSN    OPFSN
IRIS     | 97.33  95.00  99.96       | 98.66  98.86  99.46
WINE     | 97.19  88.31  98.45       | 96.63  98.20  96.60
PIMA     | 74.35  74.36  79.56       | 84.11  80.31  76.69
BUPA     | 77.10  69.22  76.56       | 78.84  76.51  69.86
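Reading off Table 9.6, a short sketch can check which model achieves the best average hit percentage over the four databases (the values are copied from the table; the averaging itself is an illustration, not part of the original procedure):

```python
# Hit percentages from Table 9.6, ordered IRIS, WINE, PIMA, BUPA.
train_hits = {"MLP":   [97.33, 97.19, 74.35, 77.10],
              "FSN":   [95.00, 88.31, 74.36, 69.22],
              "OPFSN": [99.96, 98.45, 79.56, 76.56]}
test_hits = {"MLP":   [98.66, 96.63, 84.11, 78.84],
             "FSN":   [98.86, 98.20, 80.31, 76.51],
             "OPFSN": [99.46, 96.60, 76.69, 69.86]}

for name, table in (("training", train_hits), ("test", test_hits)):
    # Mean hit percentage of each model across the four databases.
    for model, vals in table.items():
        print(f"{name:8s} {model:5s} mean = {sum(vals) / len(vals):.2f}")
    best = max(table, key=lambda m: sum(table[m]))
    print(f"best on {name} set: {best}")
```

On these numbers OPFSN has the highest mean in the training phase, while MLP has the highest mean in the test phase, which matches the discussion below.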
phase and a performance competitive with MLP. In the test phase, the WINE database gives a competitive performance, but the performance on the BUPA and PIMA databases is lower than that of MLP. The best average results in Table 9.6 are shown in bold to distinguish them from the other results. The training-phase and test-phase results are also presented separately in Table 9.6 and Fig. 9.7 for the different models and databases.
The same datasets are then simulated with the OPFSN model in a wrapper approach. We allow the PSO to select different sets of features, with set cardinality ranging from one to ten. We perform 20 simulations for the selection of each feature set, and the average classification accuracy is used for comparison. Figure 9.8 shows the mean of the average results obtained by exposing the training and test sets to the model while the different feature sets are used for training.
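The wrapper protocol above can be sketched as follows. This is a minimal, self-contained illustration: a random subset search stands in for the PSO feature selector, a nearest-centroid classifier on synthetic data stands in for OPFSN, and all names and data are assumptions, not the chapter's implementation.

```python
# Wrapper-style feature selection: for each subset cardinality k = 1..10,
# run 20 simulations, train on the selected features, average the accuracy.
import random
from statistics import mean

random.seed(0)
NUM_FEATURES = 10

def make_data(n=60):
    """Synthetic two-class data; the class shifts the first few features."""
    data = []
    for i in range(n):
        label = i % 2  # balanced classes
        x = [random.gauss(label * 2.0 if j < 4 else 0.0, 1.0)
             for j in range(NUM_FEATURES)]
        data.append((x, label))
    return data

def nearest_centroid_accuracy(train, test, feats):
    """Train a nearest-centroid classifier restricted to features `feats`."""
    cents = {}
    for label in (0, 1):
        rows = [x for x, y in train if y == label]
        cents[label] = [mean(r[j] for r in rows) for j in feats]
    hits = 0
    for x, y in test:
        pred = min((0, 1), key=lambda c: sum(
            (x[j] - cents[c][i]) ** 2 for i, j in enumerate(feats)))
        hits += (pred == y)
    return hits / len(test)

train, test = make_data(), make_data()
avg_acc = {}
for k in range(1, NUM_FEATURES + 1):        # cardinality one to ten
    accs = []
    for _ in range(20):                     # 20 simulations per cardinality
        feats = random.sample(range(NUM_FEATURES), k)  # stand-in for PSO
        accs.append(nearest_centroid_accuracy(train, test, feats))
    avg_acc[k] = mean(accs)                 # averaged for comparison

for k, a in sorted(avg_acc.items()):
    print(f"cardinality {k}: mean accuracy {a:.3f}")
```

A real PSO would replace the `random.sample` call with a swarm search over binary feature masks, scoring each particle by the same wrapped classifier accuracy.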
Fig. 9.7. Comparison of average classification accuracy of MLP, FSN and OPFSN for the training sets. X-axis values represent 1: IRIS, 2: WINE, 3: PIMA, and 4: BUPA databases.