End if
End for
Output: A set of general rules GRULE
The rule generation algorithm initializes the set of general rules GRULE to the empty set and copies one rule r_i ∈ RULE to rule r. A condition is dropped from rule r, and rule r is then checked for decision consistency against every rule r_j ∈ RULE. If rule r becomes inconsistent, the dropped condition is restored. This step is repeated until every condition of the rule has been dropped once; the resulting rule is the generalized rule. Before rule r is added to GRULE, it is checked for redundancy: if rule r is logically included in any rule r_a ∈ GRULE, rule r is discarded, and if any rules in GRULE are logically included in rule r, those rules are removed from GRULE. After all rules in RULE have been processed, GRULE contains the set of general rules.
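A minimal sketch of this condition-dropping procedure is given below. It assumes a rule is stored as a dictionary of attribute-value conditions together with a decision label; the helper names and the exact consistency test are illustrative assumptions, not the chapter's code.

# Sketch of the rule-generalization step; the rule representation is assumed.
def is_consistent(conds, decision, rules):
    # Inconsistent if some rule with a different decision satisfies
    # all of the candidate's remaining conditions.
    for other_conds, other_decision in rules:
        if other_decision != decision and all(
                other_conds.get(a) == v for a, v in conds.items()):
            return False
    return True

def generalize(rules):
    grule = []
    for conds, decision in rules:
        r = dict(conds)
        for attr in list(conds):                 # try dropping each condition once
            dropped = r.pop(attr)
            if not is_consistent(r, decision, rules):
                r[attr] = dropped                # restore if consistency is lost
        # redundancy check against rules already in GRULE
        if any(set(g.items()) <= set(r.items()) and gd == decision
               for g, gd in grule):
            continue                             # r is included in an existing rule
        grule = [(g, gd) for g, gd in grule
                 if not (set(r.items()) <= set(g.items()) and gd == decision)]
        grule.append((r, decision))
    return grule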
The goal of classification is to assign a new object to a class from a given set of classes based on the attribute values of this object. To classify objects that have never been seen before, the rules generated from a training set are used (see Algorithm-3). The classification algorithm is based on the method for generating decision rules from decision tables. The nearest matching rule is determined as the one whose condition part differs from the attribute vector of the new image by the minimum number of attributes.
Algorithm-3: Classification of a new object.
Input: A new image u_t to be classified, the attribute vector of the new image, and the set of rules Rule
Processing:
Begin
  For each rule in Rule Do
    If match(rule, new object) Then
      Measure = |Objects|; K = |Classes|
      For i = 1 to K Do
        Collect the set of objects defining the concept X_i
        Extract Mrule(X_i, u_t) = {r ∈ Rule}
        For every rule r ∈ Mrule(X_i, u_t) Do
          T = Match_A(r) ∩ X_i and LL = LL ∪ T
        Strength = Card(LL) / Card(X_i)
        Vote = Measure * Strength
        GiveVote(Class(r), Vote)
      Return the class with the highest Vote
End
Output: The final classification
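The rule-voting step of Algorithm-3 can be sketched in Python roughly as follows. The rule representation, the stand-in for Match_A, and the dictionary of training objects per concept are assumptions made for illustration, not the chapter's implementation.

# Sketch of the voting-based classification in Algorithm-3; data layout assumed.
from collections import defaultdict

def classify(new_object, rules, objects_by_class):
    # rules: list of (conditions_dict, class_label)
    # objects_by_class: {class_label: list of attribute dicts} -- the concepts X_i
    matched = [(conds, cls) for conds, cls in rules
               if all(new_object.get(a) == v for a, v in conds.items())]
    measure = sum(len(xs) for xs in objects_by_class.values())    # |Objects|
    votes = defaultdict(float)
    for label, x_i in objects_by_class.items():
        mrule = [conds for conds, cls in matched if cls == label]  # Mrule(X_i, u_t)
        ll = set()
        for conds in mrule:
            # Match_A(r) ∩ X_i : training objects of X_i covered by rule r
            ll |= {idx for idx, obj in enumerate(x_i)
                   if all(obj.get(a) == v for a, v in conds.items())}
        if x_i and ll:
            strength = len(ll) / len(x_i)        # Card(LL) / Card(X_i)
            votes[label] += measure * strength   # Vote = Measure * Strength
    return max(votes, key=votes.get) if votes else None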
A similarity measure is required to calculate the distance between a new object and each object in the reduced decision table and to assign the object to the corresponding decision class.
In this chapter, three different distance functions are used: the Euclidean, histogram, and quadratic distance functions, which are defined below. Here h_e and h_p are M-dimensional histograms, and h_e[m] and h_p[m] are the frequencies of an element in bin m of histograms h_e and h_p, respectively.
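For illustration, commonly used forms of these three distances over M-bin histograms can be sketched as follows; the histogram distance is written here as histogram intersection and the quadratic distance as a quadratic form with a bin-similarity matrix A, which are assumptions that may differ in detail from the definitions given in the chapter.

# Sketch of standard histogram distance functions (assumed forms).
import numpy as np

def euclidean_distance(h_e, h_p):
    # Euclidean distance between two M-bin histograms.
    return float(np.sqrt(np.sum((h_e - h_p) ** 2)))

def histogram_intersection_distance(h_e, h_p):
    # Histogram-intersection dissimilarity: 1 minus the normalized overlap.
    return 1.0 - float(np.sum(np.minimum(h_e, h_p))) / float(min(h_e.sum(), h_p.sum()))

def quadratic_distance(h_e, h_p, A):
    # Quadratic-form distance: sqrt((h_e - h_p)^T A (h_e - h_p)),
    # where A[i, j] encodes the similarity between bins i and j.
    d = h_e - h_p
    return float(np.sqrt(d @ A @ d))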