– Exploring the relative importance of attributes to decision classes;
– Choosing the most important attributes for classifying objects, and deleting redundant attributes;
– Extracting decision rules with the LEM2 or Explore algorithm;
– Post-processing the acquired rules;
– Classifying new objects with the decision rules;
– Evaluating decision rule sets with k-fold cross-validation (a small sketch of the last two steps follows this list).
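The last two capabilities can be illustrated with a short Python sketch that induces a trivial rule set from a toy decision table and scores it with k-fold cross-validation. The toy table, the naive exact-match rule induction (which stands in for rule extraction but is not LEM2), and all function names are assumptions made for illustration only; they are not part of the tool described above.

from collections import Counter
import random

# Hypothetical toy decision table: each object is (attribute values, decision).
objects = [
    ({"temp": "high",   "headache": "yes"}, "flu"),
    ({"temp": "high",   "headache": "no"},  "flu"),
    ({"temp": "normal", "headache": "yes"}, "cold"),
    ({"temp": "normal", "headache": "no"},  "healthy"),
    ({"temp": "high",   "headache": "yes"}, "flu"),
    ({"temp": "normal", "headache": "no"},  "healthy"),
]

def induce_rules(train):
    # Naive rule induction: one rule per distinct attribute-value
    # combination, labelled with the majority decision (not LEM2).
    buckets = {}
    for attrs, dec in train:
        buckets.setdefault(tuple(sorted(attrs.items())), []).append(dec)
    return {cond: Counter(decs).most_common(1)[0][0] for cond, decs in buckets.items()}

def classify(rules, attrs, default="unknown"):
    # Classify a new object by exact match against a rule's condition part.
    return rules.get(tuple(sorted(attrs.items())), default)

def k_fold_accuracy(data, k=3, seed=0):
    # Plain k-fold cross-validation of the rule set's classification accuracy.
    data = data[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    correct = total = 0
    for i in range(k):
        train = [obj for j, fold in enumerate(folds) if j != i for obj in fold]
        rules = induce_rules(train)
        for attrs, dec in folds[i]:
            correct += classify(rules, attrs) == dec
            total += 1
    return correct / total

print("cross-validated accuracy:", k_fold_accuracy(objects))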
3. KDD-R
KDD-R is a KDD system based on the variable precision rough set (VPRS) model, developed at the University of Regina in Canada. It applies a decision-matrix-based method to knowledge discovery. The system has been used to analyze medical data and to discover new relations between symptoms and states of illness; it has also supported market exploration in the telecommunications industry. The system is composed of four parts:
– Data preprocessing;
– VPRS-based attribute dependency analysis and reduction of redundant attributes (sketched after this list);
– Rule extraction;
– Decision making.
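The attribute dependency analysis in the second step rests on the VPRS notion of a beta-positive region: an indiscernibility class counts toward the positive region when at least a fraction beta of its members fall into a single decision class, and the dependency is the fraction of objects covered by such classes. The Python sketch below illustrates this measure on a hypothetical toy table; the table and the function names are assumptions for illustration, not KDD-R's actual interface.

from itertools import groupby

def partition(universe, table, attrs):
    # Equivalence classes of the indiscernibility relation over `attrs`.
    key = lambda x: tuple(table[x][a] for a in attrs)
    return [set(group) for _, group in groupby(sorted(universe, key=key), key=key)]

def vprs_dependency(universe, table, cond_attrs, dec_attr, beta=0.8):
    # beta-dependency of the decision on the condition attributes: the share of
    # objects whose condition class lies, to degree >= beta, in one decision class.
    cond_classes = partition(universe, table, cond_attrs)
    dec_classes = partition(universe, table, [dec_attr])
    positive = set()
    for e in cond_classes:
        if any(len(e & x) / len(e) >= beta for x in dec_classes):
            positive |= e
    return len(positive) / len(universe)

# Hypothetical toy decision table: object id -> attribute values.
table = {
    1: {"fever": "yes", "flu": "yes"},
    2: {"fever": "yes", "flu": "yes"},
    3: {"fever": "yes", "flu": "no"},
    4: {"fever": "no",  "flu": "no"},
}
U = list(table)
print(vprs_dependency(U, table, ["fever"], "flu", beta=1.0))  # 0.25: classical dependency
print(vprs_dependency(U, table, ["fever"], "flu", beta=0.6))  # 1.0: VPRS tolerates the inconsistency

In the same spirit, an attribute can be treated as redundant when removing it from the condition set does not lower this beta-dependency, which is the idea behind the reduction step above.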
4. Rough Enough
Based on rough set theory, the Norwegian company Troll Data Inc. developed a data mining tool called Rough Enough. Readers can download the software from the website http://www.trolldata.no/renough.
Rough Enough can calculate the discernibility matrix of an information system. It provides many tools for processing approximations, such as equivalence classes, decision classes, lower approximations, upper approximations, boundary regions, rough membership values, and extensive decision rules. Reducts are generated by genetic algorithms.
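The approximation tools listed above can be sketched in a few lines of Python: the lower approximation collects the indiscernibility classes contained in a target set, the upper approximation collects those that merely intersect it, the boundary region is their difference, and the rough membership value measures how strongly an object's class supports membership. The toy information system and the function names below are assumptions for illustration and do not reflect Rough Enough's actual interface.

def equivalence_classes(universe, table, attrs):
    # Partition the universe by indiscernibility with respect to `attrs`.
    classes = {}
    for obj in universe:
        classes.setdefault(tuple(table[obj][a] for a in attrs), set()).add(obj)
    return list(classes.values())

def approximations(universe, table, attrs, target):
    # Lower/upper approximation and boundary region of the set `target`.
    lower, upper = set(), set()
    for e in equivalence_classes(universe, table, attrs):
        if e <= target:
            lower |= e          # class entirely inside the target
        if e & target:
            upper |= e          # class overlapping the target
    return lower, upper, upper - lower

def rough_membership(obj, universe, table, attrs, target):
    # |[x] & X| / |[x]|: how strongly obj's class supports membership in X.
    for e in equivalence_classes(universe, table, attrs):
        if obj in e:
            return len(e & target) / len(e)

# Hypothetical toy information system.
table = {
    1: {"fever": "yes", "cough": "yes"},
    2: {"fever": "yes", "cough": "no"},
    3: {"fever": "no",  "cough": "yes"},
    4: {"fever": "yes", "cough": "yes"},
}
U = set(table)
flu = {1, 2}                        # hypothetical decision class
low, up, bnd = approximations(U, table, ["fever", "cough"], flu)
print(low, up, bnd)                 # {2} {1, 2, 4} {1, 4}
print(rough_membership(1, U, table, ["fever", "cough"], flu))   # 0.5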
11.7 Granular Computing
Broadly speaking, granular computing may be considered a label for a new field of multidisciplinary study, dealing with theories, methodologies, techniques, and tools that make use of granules (i.e., groups, classes, and clusters) in the process of problem solving. A granule is usually composed of elements,