i. P′i := Genetic Operators(Pi)
ii. Pi := Rank and select fittest(Pi ∪ P′i)
(c) end while
(d) GP := GP ∪ Pi
3. end for each
4. return (GP)
First, the Global Population GP is initialized, and for each class Ci an
initial population Pi is generated in Step 2.a. Crossover and mutation are
applied to pairs of individuals in Pi to generate the offspring population P′i
in Step 2.b.i. The next population is then constructed by choosing good
solutions from the merged population Pi ∪ P′i: a fitness-based ranking is
performed to select the fittest rules, which serve as the basis for the next
generation. Steps 2.b.i and 2.b.ii are repeated until the rule set (classifier)
for the current class is obtained, and in Step 2.d this rule set Pi is added
to GP. Step 2 is repeated for each class, so that GP finally contains all the
rules needed to identify all classes.
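The per-class loop above can be sketched as follows. This is a minimal illustration under stated assumptions: `init_population`, `fitness`, and the bit-string rule encoding are hypothetical placeholders standing in for the paper's actual components, not its implementation.

```python
import random

POP_SIZE = 200        # population size used in the experiments
CROSSOVER_RATE = 0.7
MUTATION_RATE = 0.07

def init_population(cls, size):
    # Placeholder: random bit-string "rules" for class `cls` (hypothetical encoding).
    return [[random.randint(0, 1) for _ in range(8)] for _ in range(size)]

def fitness(rule):
    # Placeholder fitness: number of set bits, standing in for rule accuracy.
    return sum(rule)

def genetic_operators(pop):
    # Step 2.b.i: build the offspring population P'_i from random parent pairs
    # via one-point crossover and bit-flip mutation.
    offspring = []
    for _ in range(len(pop) // 2):
        a, b = random.sample(pop, 2)
        if random.random() < CROSSOVER_RATE:
            cut = random.randrange(1, len(a))
            a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
        else:
            a, b = a[:], b[:]
        for child in (a, b):
            if random.random() < MUTATION_RATE:
                i = random.randrange(len(child))
                child[i] ^= 1
            offspring.append(child)
    return offspring

def evolve_classifier(classes, generations, pop_size):
    gp = []                                    # global population GP
    for cls in classes:                        # one GA run per class C_i
        pop = init_population(cls, pop_size)   # step 2.a
        for _ in range(generations):
            offspring = genetic_operators(pop)        # step 2.b.i
            merged = pop + offspring                  # P_i ∪ P'_i
            merged.sort(key=fitness, reverse=True)    # rank by fitness
            pop = merged[:pop_size]                   # step 2.b.ii: keep fittest
        gp.extend(pop)                         # step 2.d: GP := GP ∪ P_i
    return gp

rules = evolve_classifier(["wine", "glass"], generations=10, pop_size=10)
print(len(rules))  # 20: ten rules per class
```

Note that each class is evolved independently, which is what allows the modules to run in parallel as discussed in the experiments.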
3 Experimental Results
We implemented several classifiers and ran them on four benchmark data sets
to evaluate our approaches: the wine, glass, iris, and breast cancer data
sets, all real-world problems available in the UCI machine learning
repository [23]. We partition each data set into two parts with an equal
number of instances: one half is used to train the rule set, and the other
half to test the generalization power of the resulting rule set. All
experiments were run on a 1.8 GHz Pentium Dual Core PC with 1 GB of RAM, and
the reported results are averaged over five independent runs. The parameter
settings were: mutation rate 0.07, crossover rate 0.7, generation limit
3000, and population size 200.
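The evaluation protocol (equal train/test halves, accuracy averaged over five independent runs) can be sketched as below. The `train_and_score` callable is an illustrative placeholder for training a classifier and returning its test accuracy; it is not part of the original work.

```python
import random

def split_half(dataset, seed):
    # Shuffle and split the data set into two parts of equal size.
    rng = random.Random(seed)
    items = dataset[:]
    rng.shuffle(items)
    mid = len(items) // 2
    return items[:mid], items[mid:]   # train half, test half

def average_accuracy(dataset, train_and_score, runs=5):
    # Average test accuracy over `runs` independent train/test splits.
    scores = []
    for run in range(runs):
        train, test = split_half(dataset, seed=run)
        scores.append(train_and_score(train, test))
    return sum(scores) / len(scores)

# Toy usage with a dummy scorer that ignores the data:
data = list(range(100))
print(average_accuracy(data, lambda tr, te: 0.8))  # 0.8
```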
We applied our proposed technique to the four data sets and compared it
against Kaya's GA [24] and a normal GA in which all classes are given as
input without decomposition.
As can be seen in Table 1, the proposed technique produced classification
rules with higher accuracy. Besides the accuracy gain, performance was
also improved through the parallel execution of the different modules of
the problem.
Table 1. Comparison of the classifiers' performance on the four test data sets.

Data set   Normal GA   Kaya's   Proposed Tech.
Wine       0.29        0.81     0.89
Glass      0.15        0.51     0.78
Iris       0.89        0.93     0.90
Cancer     0.33        0.47     0.64