Image Processing Reference
In-Depth Information
A homogeneous image contains few dominant gray-tone transitions, so its normalized co-occurrence matrix has a small number of entries of large magnitude, which yields a large value for the energy feature. Conversely, if the normalized co-occurrence matrix contains a large number of small entries, the energy feature takes a small value. The second feature is entropy, which measures the disorder of an image; it attains its largest value when all elements of the normalized co-occurrence matrix are equal. When the image is not texturally uniform, many GLCM elements have very small values, which implies that the entropy is very large. Entropy is therefore inversely related to GLCM energy. The third feature is contrast, a difference moment of the normalized co-occurrence matrix that measures the amount of local variation in an image. The last feature is the inverse difference moment, which measures image homogeneity. It attains its largest value when most of the occurrences in the GLCM are concentrated near the main diagonal, and it is inversely related to GLCM contrast [for more details, see (Aboul Ella, 2007)].
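The four features above can be sketched directly from their standard definitions. The function name `glcm_features` and the use of NumPy index grids are illustrative choices, not from the source; the formulas are the usual Haralick definitions over a normalized co-occurrence matrix.

```python
import numpy as np

def glcm_features(p):
    """Texture features from a normalized co-occurrence matrix p (entries sum to 1)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                                # ensure normalization
    i, j = np.indices(p.shape)                     # row/column index grids
    eps = 1e-12                                    # avoid log(0) for empty cells
    energy = np.sum(p ** 2)                        # large when few large entries
    entropy = -np.sum(p * np.log2(p + eps))        # largest when all entries equal
    contrast = np.sum((i - j) ** 2 * p)            # difference moment: local variation
    idm = np.sum(p / (1.0 + (i - j) ** 2))         # inverse difference moment: homogeneity
    return energy, entropy, contrast, idm
```

For a matrix concentrated on the main diagonal (a homogeneous image), this gives high energy, zero contrast, and an inverse difference moment of 1, matching the behavior described above.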
Rough sets analysis
One way to construct a simpler model from data, one that is easier to understand and has more predictive power, is to create a set with a minimal number of rules. Some condition values may be unnecessary in a decision rule produced directly from the database; such values can then be eliminated to create a more comprehensible minimal rule that preserves essential information. The analysis proceeds in three stages:
• Pre-processing stage. This stage includes tasks such as the addition and computation of extra variables, decision class assignment, data cleansing, completeness and correctness checks, attribute creation, attribute selection, and discretization.
• Analysis and rule-generating stage. This stage includes the generation of preliminary knowledge, such as the computation of object reducts from the data, the derivation of rules from reducts, and the rule evaluation and prediction processes.
• Classification and prediction stage. This stage utilizes the rules generated in the previous stage to predict stock price movement.
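As a small illustration of the pre-processing stage, the sketch below shows equal-width discretization of a numeric attribute into interval labels; the function name `discretize` and the equal-width scheme are illustrative assumptions (the source does not specify which discretization method is used).

```python
import numpy as np

def discretize(values, n_bins=3):
    """Equal-width discretization of a numeric attribute into bin labels 0..n_bins-1."""
    values = np.asarray(values, dtype=float)
    # n_bins+1 equally spaced edges spanning the observed range
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    # interior edges only, so labels fall in 0..n_bins-1
    return np.digitize(values, edges[1:-1])
```

After discretization, each continuous attribute becomes a small set of symbolic values, which is what the reduct and rule-generation machinery of rough set analysis operates on.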
The computation of the core and reducts from a decision table is a way of selecting relevant features (Bazan, Nguyen, Nguyen, Synak, and Wroblewski, 2000; Starzyk, Dale, and Sturtz, 1981). It is a global method in the sense that the resultant reducts represent the minimal sets of features necessary to maintain the same classificatory power as the original, complete set of attributes. A more direct way of selecting relevant features is to assign a measure of relevance to each attribute and choose the attributes with the highest values. Based on the reduct system, we generate the list of rules that will be used to build the rough neural classifier model for new objects (Aboul Ella and Dominik,
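A reduct search can be sketched by brute force for small decision tables: a subset of attributes is a reduct if it classifies the objects as well as the full attribute set and no proper subset of it does. The function names `consistent` and `reducts` are illustrative, and the sketch assumes the full decision table is itself consistent; practical systems use far more efficient algorithms (e.g. discernibility-matrix methods).

```python
from itertools import combinations

def consistent(table, decisions, attrs):
    """True if equal values on attrs always imply equal decisions."""
    seen = {}
    for row, d in zip(table, decisions):
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, d) != d:
            return False
    return True

def reducts(table, decisions):
    """All minimal attribute subsets preserving the classificatory power (brute force)."""
    n_attrs = len(table[0])
    found = []
    for k in range(1, n_attrs + 1):
        for attrs in combinations(range(n_attrs), k):
            # skip supersets of an already-found reduct: not minimal
            if any(set(r) <= set(attrs) for r in found):
                continue
            if consistent(table, decisions, attrs):
                found.append(attrs)
    return found
```

The core is then the intersection of all reducts: the attributes that no consistent rule set can do without.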
Rough Neural Classifier
Rough neural networks (Henry and Peters, 1996; Peters et al., 2001, 2000) as used in this study consist of one input layer, one hidden layer, and one output layer. The input layer neurons accept input from the external environment; their outputs are fed to the hidden layer neurons, which in turn feed their outputs to the output layer neurons.
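The layer structure described above can be sketched as a conventional forward pass; the rough-neuron specifics of Peters et al. (paired upper/lower approximation neurons) are not detailed in this passage, so the sketch below shows only the assumed input-to-hidden-to-output flow, with `forward` and the sigmoid activation as illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass through a three-layer network: input -> hidden -> output."""
    h = sigmoid(w_hidden @ x + b_hidden)   # hidden layer activations
    return sigmoid(w_out @ h + b_out)      # output layer activations
```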