\theta^{g} = \frac{1}{k} \sum_{l \in Q} \theta^{l} \qquad (7.7)
where θ^l is the vector of consequent parameters of rule R^l, as described in
Chapter 4. The output of the Takagi-Sugeno model can now be calculated as
y = \frac{k\,\beta^{g} y^{g} + \sum_{l,\; l \notin Q} \beta^{l} y^{l}}{k\,\beta^{g} + \sum_{l,\; l \notin Q} \beta^{l}} \qquad (7.8)
For the Takagi-Sugeno model, a substitution of the k rules with equal common
parts by one general rule R^g yields the same input-output mapping. In the above
equation, it is assumed that all rules in the initial rule base have a weight w_l = 1. A
similar expression can be derived for any rule weights.
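As a quick illustration of (7.7) and (7.8), the following sketch averages the consequent parameter vectors of the k merged rules and evaluates the output of the reduced model. The array names, the affine-consequent convention, and the weighting of the general rule by k are illustrative assumptions, not the book's implementation.

```python
import numpy as np

def merge_consequents(theta_Q):
    """Eq. (7.7): average the consequent parameter vectors of the k rules in Q."""
    return np.mean(theta_Q, axis=0)           # theta_Q has shape (k, n_params)

def ts_output_after_merge(beta_g, theta_g, beta_rest, theta_rest, x_ext, k):
    """Eq. (7.8): weighted-mean output with the general rule R^g counted k times."""
    y_g = theta_g @ x_ext                     # consequent output of the general rule
    y_rest = theta_rest @ x_ext               # consequent outputs of the remaining rules
    num = k * beta_g * y_g + np.sum(beta_rest * y_rest)
    den = k * beta_g + np.sum(beta_rest)
    return num / den

# Example: three rules with identical premises, affine consequents y = a*x + b.
theta_Q = np.array([[1.0, 0.5], [1.2, 0.4], [0.8, 0.6]])
theta_g = merge_consequents(theta_Q)          # averaged consequent of R^g
```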
Another approach is to re-estimate the consequent parameters of the reduced
rule base from the training data using the least squares error technique described
in Chapter 4. This requires more computation, but it usually gives a numerically
more accurate result than the averaging in (7.7), since it enables the consequents
to adapt to the new rule base. Re-estimating the consequents of all rules from the
training samples with the least squares error approach is therefore the preferred
option.
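A minimal sketch of such a global least-squares re-estimation is given below, assuming affine consequents and normalized degrees of fulfillment as weights. The data layout, the variable names, and the use of numpy.linalg.lstsq are illustrative choices; Chapter 4 remains the authoritative description of the technique.

```python
import numpy as np

def reestimate_consequents(X, y, beta):
    """Re-fit all consequents of the reduced rule base in one least-squares problem.

    X    : (N, n)  training inputs
    y    : (N,)    training targets
    beta : (N, K)  degrees of fulfillment of the K reduced rules for each sample
    Returns theta of shape (K, n + 1), one affine consequent [a_1..a_n, b] per rule.
    """
    N, n = X.shape
    K = beta.shape[1]
    Xe = np.hstack([X, np.ones((N, 1))])            # extended regressor [x, 1]
    w = beta / beta.sum(axis=1, keepdims=True)      # normalized firing strengths
    # Each row of Phi stacks w_k(x) * [x, 1] for all K rules side by side.
    Phi = (w[:, :, None] * Xe[:, None, :]).reshape(N, K * (n + 1))
    theta_flat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta_flat.reshape(K, n + 1)
```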
7.6 Rule Base Simplification Algorithms
Based on the discussions above, an algorithm is now presented for rule base
simplification in Takagi-Sugeno models. The same procedure, carried out in three
operational steps, can also be used for Mamdani-type fuzzy models.
• Simplification, achieved by merging similar fuzzy sets and by removing
fuzzy sets similar to the universal set (a code sketch of this step follows the list).
• Dimensionality reduction, achieved by removing redundant (similar)
premise partitions.
• Rule reduction, achieved by merging rules whose premise parts have
become equal as a result of the two previous steps.
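The sketch below illustrates the first step on a single input partition, using a discretized Jaccard similarity measure in the spirit of (7.3) and the thresholds discussed in the next paragraph. The set representation, the pointwise-mean merging, and the default threshold values are assumptions for illustration, not the algorithm of Setnes (2000) verbatim.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity (7.3) of two discretized fuzzy sets (min = intersection, max = union)."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def simplify_partition(fuzzy_sets, lam=0.5, gamma=0.8):
    """Step 1: drop sets similar to the universal set, then merge mutually similar sets."""
    universe = np.ones_like(fuzzy_sets[0])
    kept = [s for s in fuzzy_sets if jaccard(s, universe) < gamma]
    merged = []
    for s in kept:
        for i, m in enumerate(merged):
            if jaccard(s, m) > lam:           # merge, e.g. by the pointwise mean
                merged[i] = (m + s) / 2.0
                break
        else:
            merged.append(s)
    return merged
```

The remaining steps proceed analogously: an input whose premise partition is similar to that of another input beyond the threshold η can be removed, and rules whose premise parts have become identical are merged via (7.7) and (7.8).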
The approach uses the Jaccard similarity measure (7.3) to determine the
similarity between the fuzzy sets in the rule base and requires three threshold
values within [0, 1]: λ for merging fuzzy sets that are mutually similar, γ for
removing fuzzy sets similar to the universal set, and η for removing redundant
input partitions. The values of γ and η should be relatively high to ensure that the
model's performance does not deteriorate. As pointed out by Setnes (2000), in
many applications the values γ = 0.8 and η = 0.8 have given good results and are
used as defaults in the algorithm, but the selection of a suitable
 