Two types of learning, structure learning and parameter learning, are used concurrently to construct the SONFIN. Structure learning includes both the precondition and consequent structure identification of a fuzzy if-then rule. Precondition structure identification corresponds to partitioning the input space and can be formulated as a combinatorial optimization problem with two objectives: to minimize the number of rules generated, and to minimize the number of fuzzy sets on the universe of discourse of each input variable. In consequent structure identification, the main task is to decide when to generate a new membership function for the output variable and which significant terms (input variables) should be added to the consequent part (a linear equation) when necessary. In parameter learning, based on supervised learning algorithms, the parameters of the linear equations in the consequent parts are adjusted by either the LMS or the RLS algorithm, and the parameters in the precondition part are adjusted by the backpropagation algorithm to minimize a given cost function. The SONFIN can be used for normal operation at any time during the learning process, without repeated training on the input-output patterns, when online operation is required.
Initially there are no rules in the SONFIN (i.e., no nodes in the network except the input/output nodes). Rules are created dynamically as learning proceeds on incoming online training data, by performing the following learning processes simultaneously: (A) input/output space partitioning, (B) construction of fuzzy rules, (C) consequent structure identification, and (D) parameter identification. Processes A, B, and C belong to the structure-learning phase; process D belongs to the parameter-learning phase. The details of these learning processes can be found in [14].
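The parameter-learning phase described above can be sketched in miniature. The following is a hedged illustration, not the SONFIN implementation from [14]: a tiny TSK-style system with two rules and one input, where the consequent linear parameters are adjusted by an LMS step and the precondition (Gaussian membership) parameters by a backpropagation-style gradient step. The rule count, initial centers/widths, and the step size `eta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

centers = np.array([-1.0, 1.0])   # Gaussian centers (precondition parameters)
widths = np.array([1.0, 1.0])     # Gaussian widths (precondition parameters)
a = np.zeros(2)                   # consequent slopes (linear equation y_i = a_i*x + b_i)
b = np.zeros(2)                   # consequent intercepts
eta = 0.05                        # illustrative learning rate for both phases

def forward(x):
    mu = np.exp(-((x - centers) ** 2) / widths ** 2)  # rule firing strengths
    w = mu / mu.sum()                                  # normalized strengths
    y = np.sum(w * (a * x + b))                        # weighted sum of consequents
    return y, w, mu

def train_step(x, target):
    """One online step: LMS on consequents, gradient on preconditions."""
    global a, b, centers, widths
    y, w, mu = forward(x)
    err = target - y
    # LMS update of the consequent linear parameters
    a += eta * err * w * x
    b += eta * err * w
    # Backpropagation through the normalized firing strengths:
    # dy/dmu_i = (y_i - y) / sum_j mu_j, then chain to centers and widths
    yi = a * x + b
    dmu = (w / mu) * (yi - y)
    centers += eta * err * dmu * mu * 2 * (x - centers) / widths ** 2
    widths += eta * err * dmu * mu * 2 * (x - centers) ** 2 / widths ** 3
    return err

# Learn a simple linear target online: y = 0.5*x + 0.2
for _ in range(2000):
    x = rng.uniform(-2.0, 2.0)
    train_step(x, 0.5 * x + 0.2)
```

Because the target is itself linear, the LMS updates alone can drive the error to near zero here; the precondition updates matter more when the target varies nonlinearly across the input partitions.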
8.3.3 Cascaded Architecture of a Neural Fuzzy Network with
Feature Mapping (CNFM)
This section gives the details of how to implement the cascaded architecture of the unsupervised and supervised neural networks presented in Sections 8.3.1 and 8.3.2, respectively. Compared with conventional methods that use supervised neural networks alone and select their inputs through trial and error, our system is composed of two connected neural networks. The general architecture of the CNFM is set up as follows. First, we use three Kohonen SOMs to reduce the dimensions of three sets of inputs: gray values, statistical properties, and features from wavelet decomposition. Instead of using all sets of selected features directly as the inputs of a supervised neural network, each Kohonen SOM transforms one set of features into 2D coordinates, and these low-dimensional values are used as the inputs of our supervised neural fuzzy network, the SONFIN. Thus, no matter how many feature sets there are, or how many features each set contains, we can transform the inputs into this simple representation. Not only do we obtain a better representation, but the 2D coordinates also carry a meaningful geometric interpretation when they serve as the inputs to a supervised neural network.
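The front-end reduction just described can be sketched as follows. This is a minimal, hedged illustration of one such Kohonen SOM (the text uses three, one per feature set): a small grid is trained on one set of feature vectors, and each vector is then replaced by the 2D grid coordinates of its best-matching unit. The grid size, feature dimensionality, learning-rate schedule, and neighborhood schedule are all illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = 8    # 8x8 map; the output is a (row, col) coordinate pair
DIM = 16    # assumed dimensionality of one input feature set

weights = rng.uniform(size=(GRID, GRID, DIM))
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                              indexing="ij"), axis=-1)

def bmu(x):
    """Best-matching unit: (row, col) of the codebook vector closest to x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def train(data, epochs=20):
    """Standard SOM training with decaying rate and shrinking neighborhood."""
    global weights
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)
        sigma = max(GRID / 2 * (1 - t / epochs), 0.5)
        for x in data:
            r, c = bmu(x)
            dist2 = ((coords - np.array([r, c])) ** 2).sum(axis=-1)
            h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]  # neighborhood
            weights += lr * h * (x - weights)

# Reduce 100 feature vectors of dimension 16 to 2D map coordinates,
# scaled to [0, 1], for use as inputs to the downstream supervised network.
data = rng.uniform(size=(100, DIM))
train(data)
xy = np.array(bmu(data[0])) / (GRID - 1)
```

Because the SOM preserves topology, nearby feature vectors land on nearby grid cells, which is what gives the 2D coordinates their geometric meaning as inputs to the supervised stage.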
To go into a little more detail, Fig. 8.10 shows the architecture of our system.
The first set of inputs contains three gray values, each of which comes from one of