The efficient use of neurofuzzy unification would simplify personnel training, since: i) it helps the practitioner to quickly learn and become familiar with the very few basic paradigms; ii) it augments the flexibility and performance of intelligent systems; iii) it therefore increases the economic return; and iv) it reduces the corresponding risks. Nevertheless, neurofuzzy unification is still far from being widely applied, for several historical reasons. The persistence of tens of apparently different paradigms often still creates confusion, noise and disaffection; it increases personnel training costs and reduces the advantages; altogether, it significantly reduces the appeal of neurofuzzy systems.
3. Relevant characteristics for industry
In this section, we try to analyze some of the reasons why intelligent systems still experience difficulties in being accepted as an industrial standard.
3.1 Crypticity
Many intelligent systems are often felt to be rather cryptic, in the sense that nobody can really understand why and how a trained network solves a given problem. Despite the many theoretical proofs that an intelligent system is capable of solving a large variety of problems, the industrially relevant issue is that all the knowledge of a trained network is hidden within a chunk of numbers, usually arranged into weight or centre matrices or genomes. There is usually no clue as to how to interpret such “magic numbers”, so engineers are often sceptical about their correctness, reliability or robustness.
In practice, the correctness of the weights rests on successful training, although it is often difficult to either guarantee or perceive that training has properly succeeded. The quality of training is measured by the amount of residual error, but there is often no indication of what an appropriate value for this error is, especially when sum-of-errors measures are used, as in several commercial simulation tools. The user cannot reliably argue that a trained model is really representative of the desired system/function.
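To make the point concrete, the minimal sketch below (in Python, with made-up data; the function names are ours, not those of any particular tool) contrasts a raw sum-of-squared-errors figure, whose magnitude depends on the number of samples and on the signal scale, with a range-normalised error whose value can be read directly as a fraction of the target span.

```python
import numpy as np

def sum_of_squared_errors(y_true, y_pred):
    # Raw sum of squared errors: grows with dataset size and signal scale,
    # so its absolute value says little about model quality.
    return float(np.sum((y_true - y_pred) ** 2))

def range_normalised_rmse(y_true, y_pred):
    # RMSE divided by the target range: a dimensionless figure that can
    # be read as a fraction of the signal span.
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return float(rmse / (np.max(y_true) - np.min(y_true)))

# Made-up example: the same prediction quality looks very different
# through the two measures.
rng = np.random.default_rng(0)
y_true = np.sin(np.linspace(0.0, 6.0, 1000))
y_pred = y_true + rng.normal(scale=0.05, size=y_true.shape)

print(sum_of_squared_errors(y_true, y_pred))   # scale-dependent, hard to judge
print(range_normalised_rmse(y_true, y_pred))   # ~0.025, i.e. ~2.5% of the range
```

With a measure of the second kind, a statement such as “the trained model deviates by roughly 2-3% of the output range” is something an engineer can assess; a bare sum of errors offers no such reference point.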
Furthermore, most training processes involve some amount of randomness, which is seldom appreciated in the industrial domain. On the other hand, traditional design methods (namely, those not using intelligent systems) are based on some predictable analytical or empirical model chosen by the designer, together with its parameters. The designer's knowledge and experience usually provide enough information to solve a problem properly, even though seldom in an optimal way. Nothing is apparently left to randomness.
In reality, the process of empirically adapting a given analytical or empirical model to a given system resembles the approach of training/adapting a soft computing system (which is nothing but a highly generic parametric model) on a set of training data. Yet everybody considers the former normal and straightforward, while most designers are still sceptical when facing the latter. Why is that so?
One of the reasons is that the traditional (namely, non-intelligent) parametric models currently used in practice are much less generic than any soft computing model; therefore they remain under the total control of the engineer, who is capable of properly interpreting their parameters and values.
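The contrast can be illustrated with a small sketch, assuming a hypothetical first-order decay process: the designer-chosen analytical model y = a·exp(-b·t) has two parameters with an immediate physical reading (initial amplitude and decay rate), whereas a generic parametric model fitted to the same data, here a sixth-degree polynomial standing in for a far more generic soft computing model, can reproduce the curve about as well although its coefficients carry no such meaning.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical measurements from a first-order decay process.
t = np.linspace(0.0, 5.0, 50)
y = 3.0 * np.exp(-0.8 * t) + rng.normal(scale=0.02, size=t.shape)

# Designer-chosen analytical model: y = a * exp(-b * t).
# Fitted by linearising log(y) = log(a) - b*t; both fitted parameters
# have a direct physical reading (initial amplitude, decay rate).
slope, intercept = np.polyfit(t, np.log(np.clip(y, 1e-6, None)), deg=1)
a_hat, b_hat = np.exp(intercept), -slope
print(f"analytical model: a = {a_hat:.2f}, b = {b_hat:.2f}")

# Generic parametric model (a stand-in for a soft computing model):
# a sixth-degree polynomial fits the same data reasonably well,
# but its coefficients mean nothing to the engineer.
coeffs = np.polyfit(t, y, deg=6)
print("generic model coefficients:", np.round(coeffs, 3))
```

The analytical fit can be checked against the engineer's expectations (is the decay rate plausible for this process?), whereas the polynomial coefficients, like the weights of a trained network, can only be judged indirectly through the residual error.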