and the low computational cost have favoured the development of on-line self-tuning and self-adapting strategies and, in some cases, the creation of fuzzy systems. The so-called Evolving Intelligent Systems emerged in this context as an alternative way to update knowledge and to refine the model through interaction with the environment. Their main advantage for modeling and controlling processes is that the structure of the Evolving Intelligent System changes depending on what the process demands, unlike current intelligent control techniques, which are basically focused on a fixed control structure. Moreover, if the evolving system paradigm is applied in conjunction with the hybridization of neural networks and fuzzy logic, very promising solutions can be obtained. Therefore, Evolving Neuro-fuzzy Systems emerge as an alternative control tool to deal with complexity (Lin et al. 2008).
On the other hand, in terms of learning procedures, most evolving neuro-fuzzy strategies apply inductive reasoning. The key issue in inductive reasoning is to find a general model (function), drawn from the entire set of input/output data, that represents the whole system. The model is then used for designing the required control system. In contrast, transductive reasoning methods generate a model at a single point in the workspace. The dynamic generation of local models represents the knowledge as the set of known data, which facilitates incremental on-line learning. In addition, these strategies are capable of functioning correctly with a small training set. In this chapter, a transductive on-line neuro-fuzzy strategy (TNFIS) for tuning a fuzzy controller is described; it is selected due to its interesting characteristics and properties.
If input/output data are available, neuro-fuzzy strategies become a feasible option for process improvement. The transductive neuro-fuzzy inference system (TNFIS) (Gajate et al. 2010), inspired by Song and Kasabov's approach (Song and Kasabov 2006), involves the creation of local models for each subspace of the problem using the Euclidean distance. The Euclidean distance is selected over alternative metrics (e.g., the Mahalanobis distance) with the aim of obtaining a fast time response with minimal
computational overhead. Moreover, TNFIS uses a Mamdani-type inference method, and the membership functions are typically Gaussian. This type of membership function is differentiable, enabling the use of supervised learning algorithms such as back-propagation. A representative scheme for inductive reasoning by ANFIS is shown in Fig. 7.4a. The diagram of a typical TNFIS topology is depicted in Fig. 7.4b.
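Since Gaussian membership functions are differentiable, their parameters can be tuned by gradient-based learning such as back-propagation, as noted above. A minimal sketch of such a function and its analytic gradients is shown below; the parameter names `c` (centre) and `sigma` (width) are illustrative choices, not notation taken from the chapter.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership degree of x for a fuzzy set
    with centre c and width sigma (illustrative names)."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def gaussian_mf_grads(x, c, sigma):
    """Membership value plus its analytic derivatives w.r.t.
    the tunable parameters c and sigma -- the quantities a
    back-propagation update would use."""
    mu = gaussian_mf(x, c, sigma)
    dmu_dc = mu * (x - c) / sigma ** 2
    dmu_dsigma = mu * (x - c) ** 2 / sigma ** 3
    return mu, dmu_dc, dmu_dsigma
```

Because both derivatives are available in closed form, a supervised update (e.g., gradient descent on a squared output error) can adjust each rule's centre and width on-line.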
Local models are created using the data from the training set that are closest to each new input datum. The Euclidean distance (7.1) is used to select each data subset, in other words, the nearest neighbours. The size of the subset of neighbours (N_q) is an input parameter of the algorithm. Once it has computed the nearest neighbours, the algorithm calculates a weight for each distance (7.2):
$$\left\| x - x_k \right\| \;=\; \frac{1}{P}\left[\, \sum_{j=1}^{P} \left| x_j - x_{kj} \right|^{2} \right]^{1/2} \qquad (7.1)$$

where x = (x_1, ..., x_P) is the new input datum and x_k is the k-th training sample.
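The neighbour-selection step described above can be sketched as follows. The distance is the dimension-normalised Euclidean norm of (7.1); since equation (7.2) is referenced but not reproduced here, the linear distance-based weighting below (closer neighbours receive larger weights) is only an illustrative assumption, not the chapter's actual scheme.

```python
import numpy as np

def nearest_neighbours(X_train, x_new, N_q):
    """Return the indices and distances of the N_q training
    samples closest to x_new, using the normalised Euclidean
    distance of (7.1): ||x - x_k|| = (1/P) * sqrt(sum |x_j - x_kj|^2)."""
    P = X_train.shape[1]
    d = np.sqrt(np.sum((X_train - x_new) ** 2, axis=1)) / P
    idx = np.argsort(d)[:N_q]          # N_q nearest neighbours
    return idx, d[idx]

def distance_weights(d):
    """Illustrative stand-in for (7.2): map each distance to a
    weight in [0, 1], largest for the closest neighbour."""
    d = np.asarray(d, dtype=float)
    return 1.0 - d / (d.max() + 1e-12)
```

For example, with three training samples and N_q = 2, the two samples nearest the query are selected and the closer of the two receives the larger weight, which is the behaviour the local-model construction relies on.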