and discuss how to understand high-dimensional neurocomputing using the familiar techniques of vector algebra and hypercomplex number systems. For readers who are not familiar with generalizations of the real numbers, such as complex numbers and quaternions, this is an excellent introduction to them. The comprehensive extension in the dimensionality of neurocomputing gives a sense of which neural systems are appropriate targets for particular kinds of computational modeling, and of how to go about modeling such systems. This material is important for readers who are less accustomed to high-dimensional neurocomputing in general; they will certainly enjoy the later parts of this book.
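As a taste of the hypercomplex systems involved, the sketch below multiplies two quaternions with the standard Hamilton product; the component-wise formula is the textbook one, and the helper function itself is purely illustrative.

```python
def hamilton_product(p, q):
    """Hamilton product of quaternions p = (w, x, y, z) and q = (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(hamilton_product(i, j))  # (0, 0, 0, 1)  -> i*j = k
print(hamilton_product(j, i))  # (0, 0, 0, -1) -> j*i = -k: non-commutative
```

The reversed sign in the second product is exactly the non-commutativity that distinguishes quaternions from real and complex numbers.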
Chapter 3 sets the stage for the in-depth coverage of neurocomputing in the complex domain. The chapter also examines various types of neural activation functions and their differentiability. A discussion of the different types of activation functions is followed by a presentation of error functions. While the conventional error backpropagation (BP) algorithm minimizes a quadratic error function by steering the weights along the direction of the negative gradient (using the update rule), the chapter points out alternative error functions that can effectively improve the performance of a neural network. Finally, more advanced training methods for error BP learning are presented.
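As a point of reference, the gradient-descent update rule the chapter builds on can be sketched in a few lines. The single linear neuron, learning rate, and quadratic error below are illustrative assumptions, not the book's own notation.

```python
import numpy as np

def bp_step(w, x, t, eta=0.1):
    """One gradient-descent step for a single linear neuron
    under the quadratic error E = 0.5 * (t - y)**2.
    w, x: weight and input vectors; t: target; eta: learning rate."""
    y = w @ x                 # neuron output (identity activation)
    grad = -(t - y) * x       # dE/dw for the quadratic error
    return w - eta * grad     # steer weights along the negative gradient

# Illustrative usage: fit a 2-input neuron to a single sample.
w = np.zeros(2)
x, t = np.array([1.0, 2.0]), 3.0
for _ in range(50):
    w = bp_step(w, x, t)
print(w @ x)  # approaches the target 3.0
```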
Chapter 4 begins with a discussion of basic neuron models, which are the building blocks of a neural system. Citing recent publications, the chapter notes that neuron models with nonlinear input aggregation have better computational and learning properties than conventional neurons. The use of nonconventional neural units (higher-order neurons) appears to be overtaking conventional ones in popularity in recent publications. They offer an adjustable, strongly nonlinear input-output mapping without the local-minima issue, owing to the nonlinearity of the neural architecture itself. This chapter introduces two compensatory higher-order neural units and one generalized higher-order neural unit, of which various existing standard models are special cases. The theoretical derivations of the learning rules are supported by examples that demonstrate the superiority of the presented approach.
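To make the idea of nonlinear input aggregation concrete, here is a minimal sketch of a second-order (quadratic) neuron next to a conventional one. The quadratic form and all names are illustrative assumptions, not the specific compensatory or generalized units the chapter derives.

```python
import numpy as np

def conventional_neuron(x, w, b):
    """Conventional neuron: linear aggregation, then activation."""
    return np.tanh(w @ x + b)

def higher_order_neuron(x, W2, w1, b):
    """Second-order neuron: the aggregation itself is nonlinear,
    adding pairwise input products x_i * x_j via x^T W2 x."""
    return np.tanh(x @ W2 @ x + w1 @ x + b)

x = np.array([0.5, -1.0])
w1, b = np.array([0.3, 0.7]), 0.1
W2 = np.array([[0.2, 0.05],
               [0.05, -0.1]])
print(conventional_neuron(x, w1, b))
print(higher_order_neuron(x, W2, w1, b))  # extra quadratic terms in the net input
```

The extra trainable products let a single unit realize decision surfaces that a conventional neuron can only reach by stacking layers.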
Chapter 5 presents the functional mapping properties of high-dimensional neurons to demonstrate their phase approximation capabilities. For phase approximation, one does not need an error function that simultaneously minimizes both magnitude and phase errors, because of the inbuilt nature of complex numbers (real and imaginary parts with embedded phase information) that flow through a complex-valued neural network. Thus, the learning algorithm achieves convergence not only with respect to magnitude but also with respect to phase. Therefore, during functional mapping (transformation), the phase (angle) of each point in a geometric structure is preserved not only in magnitude but also in sense. The illustrative examples in this chapter demonstrate the phase-preserving property through a variety of mapping problems on the plane (conformal mapping). The concepts presented in this chapter have a wide spectrum of applications in science and engineering that merit further investigation.
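The phase-preserving behavior can be illustrated with a single complex weight. Multiplying a point z by a complex weight w scales |z| by |w| and shifts the phase by arg(w), so angles between directions survive the map (a conformal, phase-preserving transformation). The particular weight below is an arbitrary choice for demonstration, not an example from the chapter.

```python
import numpy as np

w = 0.8 * np.exp(1j * np.pi / 6)   # illustrative complex weight

z1, z2 = 1.0 + 1.0j, 2.0 + 0.5j
angle_before = np.angle(z2) - np.angle(z1)
angle_after = np.angle(w * z2) - np.angle(w * z1)
print(np.isclose(angle_before, angle_after))  # True: the angle (and its sense) is preserved
```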
Chapter 6 formulates a neuron model that can deal with three-dimensional signals as one cluster, called the 3D real-valued vector neuron. The basic learning rules for training a sophisticated network of three-dimensional vector neurons are covered.
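A minimal sketch of such a unit follows: a 3x3 weight matrix maps the whole 3D input vector to a 3D output in one step. Both the model and the gradient step below are plausible assumptions for illustration, not the book's exact formulation or learning rule.

```python
import numpy as np

def vector_neuron(W, b, x):
    """3D real-valued vector neuron: a 3x3 weight matrix processes
    the three-dimensional input signal as one cluster."""
    return np.tanh(W @ x + b)

def train_step(W, b, x, t, eta=0.1):
    """One gradient step under the quadratic error 0.5*||t - y||^2
    (an assumed rule for this simple model)."""
    y = vector_neuron(W, b, x)
    delta = (t - y) * (1.0 - y**2)   # error times tanh'(net), element-wise
    return W + eta * np.outer(delta, x), b + eta * delta

W, b = np.zeros((3, 3)), np.zeros(3)
x, t = np.array([0.2, -0.5, 0.9]), np.array([0.3, 0.1, -0.2])
for _ in range(200):
    W, b = train_step(W, b, x, t)
print(vector_neuron(W, b, x))  # approaches the 3D target
```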