Despite the level of maturity they have reached, intelligent systems still find it hard to gain acceptance in the industrial community, which tends to see them as academic experiments or exotic techniques rather than as powerful tools for solving its problems. What, then, is still missing for full industrial maturity? This is analyzed in the following.
2.1 Availability of expertise and training personnel
Knowledge and expertise on intelligent systems are not easy to find in the industrial domain, at least among decision-making people (namely decision staff, businessmen and engineers), except perhaps in the youngest generation (who are still too few and not yet high enough on the decision-making ladder). Decision-making people are the key actors in getting intelligent systems accepted by industry. On the other hand, the real experts in intelligent systems are those who have been trained for a long time in the area of soft computing, but they often have too little knowledge of the specific problem they are faced with, and therefore they may not tackle it in the most appropriate or efficient way.
Personnel training is rather time-consuming, and therefore costly, for industry, and it can seldom be afforded unless there is a reasonable guarantee of an appropriate return. It must be remembered that adopting any novel method may offer advantages, but it certainly costs money. The lack of solid expertise, together with people's inertia, often leads to oversized networks, oversized training sets, and conservative choices of paradigms and learning coefficients. Altogether, this means more complex (and therefore more costly) networks, longer design and training times, and fewer advantages; in conclusion, less chance of acceptance.
2.2 The apparent diversity of neurofuzzy paradigms
At the beginning, neural networks, fuzzy systems and other soft computing techniques such as wavelet networks, Bayesian classifiers and clustering methods were believed to be independent, although complementary, methods that had to be analyzed and studied separately. This required an excessive effort to study, analyze and become familiar with a huge variety of methods, and to train personnel accordingly. It was also believed that each paradigm had its own characteristics and preferred application domains, so that considerable experience was required to choose the best architecture for a given application.
On the contrary, Reyneri (Reyneri, 1999) proved that most soft computing techniques are nothing but different languages for a few basic paradigms. For instance, he proved that Perceptrons, Adaline, wavelet networks, linear transforms and adaptive linear filters are equivalent to one another; that fuzzy logic, radial basis functions, Bayesian classifiers, Gaussian regressors, kernel methods, Kohonen maps and fuzzy/hard c-means are equivalent methods; and that local-global networks, Takagi-Sugeno-Kang (TSK) fuzzy systems and gain-scheduling controllers are likewise equivalent.
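One such equivalence can be checked directly. The following sketch (centres, widths and weights are illustrative values, not taken from the chapter) shows that a normalized Gaussian radial-basis-function network and a zero-order Takagi-Sugeno fuzzy system with Gaussian membership functions and product inference compute the very same function, differing only in the "language" used to describe it:

```python
import numpy as np

# Illustrative shared parameters: 5 units (rules) over 2 inputs.
rng = np.random.default_rng(0)
centres = rng.normal(size=(5, 2))  # RBF centres = membership-function centres
sigma = 0.8                        # common width
weights = rng.normal(size=5)       # output weights = rule consequents

def normalized_rbf(x):
    """Normalized radial-basis-function network."""
    act = np.exp(-np.sum((x - centres) ** 2, axis=1) / (2 * sigma ** 2))
    return np.dot(weights, act / act.sum())

def ts_fuzzy(x):
    """Zero-order Takagi-Sugeno system: Gaussian memberships,
    product inference, centre-of-gravity defuzzification."""
    # Product of per-input Gaussians = joint Gaussian firing strength.
    mu = np.prod(np.exp(-(x - centres) ** 2 / (2 * sigma ** 2)), axis=1)
    return np.sum(weights * mu) / np.sum(mu)

x = np.array([0.3, -0.7])
print(normalized_rbf(x), ts_fuzzy(x))  # identical up to rounding
```

The product of per-input Gaussian memberships equals a single multivariate Gaussian activation, so the two "paradigms" are one and the same model written in two notations.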
By making good use of this neurofuzzy unification, the number of independent paradigms reduces to as few as four. All known topologies for neural, fuzzy, wavelet, Bayesian and clustering paradigms, and all supervised or unsupervised training algorithms, are in practice just particular implementations and interconnections of four elementary blocks, namely: i) computing elements; ii) computing layers; iii) normalization layers; and iv) sensitivity layers. All traditional neurofuzzy paradigms are then nothing but specific languages, each one more appropriate to a given class of applications.
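The modular view above can be sketched in a few lines. This is an interpretation of the four-block idea, not Reyneri's own code: block names and the two computing elements below are illustrative, and the sensitivity layer (used during training) is omitted. The same skeleton yields a perceptron-style layer or a normalized RBF/fuzzy layer, depending only on which computing element is plugged in:

```python
import numpy as np

def computing_layer(x, params, elem):
    """ii) Computing layer: applies one computing element i) per unit."""
    return np.array([elem(x, p) for p in params])

def dot_elem(x, w):
    """Weighted-sum computing element (Perceptron/Adaline family)."""
    return np.tanh(x @ w)

def radial_elem(x, c):
    """Distance-based computing element (RBF/fuzzy/clustering family)."""
    return np.exp(-np.sum((x - c) ** 2))

def normalization_layer(a):
    """iii) Normalization layer: rescales activations to sum to one."""
    return a / a.sum()

x = np.array([0.5, -0.2])

# Perceptron-style layer: computing layer of weighted-sum elements.
mlp_out = computing_layer(x, [np.array([1.0, 2.0]),
                              np.array([-1.0, 0.5])], dot_elem)

# Normalized RBF / fuzzy layer: computing layer of radial elements
# followed by a normalization layer.
rbf_out = normalization_layer(
    computing_layer(x, [np.zeros(2), np.ones(2)], radial_elem))
```

Under this reading, choosing a "paradigm" amounts to choosing a computing element and deciding whether a normalization layer follows, which is exactly why so many apparently distinct architectures collapse into interconnections of the same few blocks.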