algorithm would perform, working within the constraints of analyticity and/or boundedness, when applied to various problems. In most problems of practical interest that involve an optimization process, a quadratic error function is chosen and subjected to optimization. It has been pointed out in the literature (Werbos and Titus 1978; Gill and Wright 1981; Fernandez 1991) that employing a different error function can improve the performance of an optimization scheme. Moreover, the M-estimators approach to data analysis (Rey 1983) lists a number of functions that can effectively serve as error functions. These error functions have been shown to suppress the ill effects of outliers, to be robust to noise, and to outperform the standard quadratic function in optimization problems involving data with scattered outliers. Hence the question of how the error BP algorithm would perform when the error function is varied immediately arises.
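To illustrate why an alternative error function can suppress the effect of outliers, the following minimal sketch compares the gradient of the quadratic loss with that of Huber's loss, a classic M-estimator of the kind catalogued by Rey (1983). The residual values and the threshold delta are illustrative assumptions, not taken from the text:

```python
# Compare the influence of residuals under the quadratic loss and
# under Huber's loss, a classic M-estimator. The gradient is what
# drives a BP-style update, so a bounded gradient means a bounded
# influence for any single outlying sample.

def quadratic_loss_grad(r):
    """Quadratic loss 0.5*r^2; its gradient grows linearly with r."""
    return 0.5 * r * r, r

def huber_loss_grad(r, delta=1.0):
    """Huber loss: quadratic for |r| <= delta, linear beyond.
    Its gradient is clipped at +/- delta, bounding an outlier's pull."""
    if abs(r) <= delta:
        return 0.5 * r * r, r
    return delta * (abs(r) - 0.5 * delta), delta * (1.0 if r > 0 else -1.0)

residuals = [0.1, 0.5, -0.3, 8.0]   # the last value plays the outlier
for r in residuals:
    _, gq = quadratic_loss_grad(r)
    _, gh = huber_loss_grad(r)
    print(f"r={r:+.1f}  quadratic grad={gq:+.2f}  huber grad={gh:+.2f}")
```

For the inlying residuals the two gradients coincide, but the outlier at r = 8.0 contributes a gradient of 8.0 under the quadratic loss and only 1.0 under Huber's loss, which is exactly the robustness property the text attributes to the alternative error functions.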
Moreover, it has recently been shown that complex-valued neural networks can solve real-valued problems more efficiently than their real-valued counterparts [21, 26]. Since then, several complex-valued neural networks have been developed to solve real-valued problems [9, 14, 34, 35]. Further, this book extends the theory and practice of vector-valued neural networks for those interested in learning more about how best to characterize such systems and explore them experimentally in three-dimensional neurocomputing.
The scientific community believes that for artificial intelligence to become a reality, that is, for a system to be as intelligent as we are, relatively simple computational mechanisms are required; only then can one simulate the real aspects of human intelligence broadly and achieve it fully. Unfortunately, there is a lack of compiled literature aimed at providing clear and general methods for high-dimensional neurocomputing tools at the systems level. More practically speaking, we also wish to encourage the current trend among experimentalists to take seriously the insights gained from high-dimensional computing. The explicit methodology and the many examples presented in this book are intended to show precisely how high-dimensional neurocomputing can be used to build the kind of mechanisms that the scientific community and other readers can exploit.
This book is well suited to students and professionals, and would also be a valuable supplement for researchers embarking on a career in a new generation of neural networks. Prospective readers who are not yet familiar with this philosophy should find it enjoyable and readable. Each chapter starts with a description of the strategy in general terms, outlining the form that the computation will take in detail within the different sections. The major goal of the chapter organization is to emphasize the arts of synthesis and analysis in neurocomputing. The respective chapters are organized as follows:
This chapter introduces the reader to the evolution of computational neuroscience and the chronological developments in the history of neurocomputing. It also presents the supremacy of high-dimensional neurocomputing in various recent works, which continue to be a compelling reference for advances in machine learning and intelligent system design.
Chapter 2 is devoted to those interested in knowing more about how the quantitative representation of signals flowing through an ANN relates to the neurocomputing