with complexity analysis was presented. He also pointed out certain problems that the standard BP fails to solve but the CBP manages to. He, and later Tripathi (2009), preferred to incorporate a nonanalytic (so-called "split") but bounded activation function, and demonstrated the reasons for this preference. Meanwhile, T. Adali et al. (2003) presented a fully complex-valued neural network with different analytic activation functions, arguing how to overcome the issues related to the unboundedness of such functions [5]. The
set of problems to which the CBP can be successfully applied remains a research theme, particularly in light of the problems that the CBP solves but the BP fails to. From 2009 to 2012, Tripathi conducted an exhaustive study of the related research and compiled this diverse field into a comprehensive and novel treatment of complex-valued neurocomputing. All these works were endeavors toward the development of high-dimensional neurocomputing, along with its performance evaluation and wide applicability.
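The contrast between the two activation-function choices discussed above can be sketched as follows (a minimal NumPy illustration of the general idea, not code from any of the cited works; the function names are our own):

```python
import numpy as np

def split_tanh(z):
    """'Split' activation: a bounded real function applied separately to
    the real and imaginary parts. Bounded everywhere, but not analytic."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def fully_complex_tanh(z):
    """Fully complex activation: analytic, but unbounded near the
    singularities of tanh(z) at z = i*(pi/2 + k*pi)."""
    return np.tanh(z)

z = np.array([0.5 + 0.5j, 0.01 + 1.57j])   # second point lies near a pole of tanh
print(np.abs(split_tanh(z)))               # magnitude stays below sqrt(2)
print(np.abs(fully_complex_tanh(z)))       # magnitude blows up near the pole
```

This is exactly the trade-off debated in the literature above: the split function sacrifices analyticity to guarantee boundedness, while the fully complex function keeps analyticity at the cost of singularities.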
Various researchers have independently proposed extensions of the real-valued neuron (one dimension) to higher dimensions [10, 16-18]. Most of them have followed the natural extension of number fields, such as real numbers (one dimension), complex numbers (two dimensions), 3-D real-valued vectors (three dimensions), quaternions (four dimensions), etc., for the representation of higher-dimensional neurons. The complex-valued neural network (CVNN) has received much attention from researchers in the recent past. It can operate directly on 2-D information, and is therefore especially significant for problems where we wish to learn and analyze signal amplitude and phase simultaneously and precisely. Research in biophysics highlights the fact that action potentials in the human brain may have different pulse patterns, and that the distances between pulses may differ. This justifies the introduction of complex numbers, representing phase and amplitude, into neural networks. CVNNs provide efficient solutions for complex-valued problems. Tohru Nitta [8] and Tripathi [11] showed that the number of learning parameters needed by a CVNN for a complex-valued problem is approximately half that of the equivalent real-valued neural network (RVNN), even when each complex-valued parameter is counted as two. They also showed that the time complexity per learning cycle is the same in both networks, but that a CVNN requires far fewer learning cycles than the equivalent RVNN. The CVNN is more general in the sense that it can provide efficient solutions for functions in a single dimension as well as on a plane (two dimensions). CVNNs have also outperformed RVNNs even on real-valued problems.
For example, in [19, 20], a single-layer CVNN successfully solved the XOR problem, which cannot be solved by a single-layer RVNN. Moreover, efficient solutions to many real-valued problems have been achieved with CVNNs in recent publications [14, 21, 22]. CVNNs are universal approximators. For a problem of the same complexity, they also require a smaller network topology (hence fewer learning parameters) and less training time, while yielding better and more accurate results than the equivalent RVNN. Their 2-D structure of error propagation reduces the problem of saturation during learning and offers faster convergence.
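To make the single-layer XOR claim concrete, here is one hand-constructed single complex-valued neuron that realises XOR. The weights and the quadrant-based decoding are our own illustrative choices, not the construction from [19, 20]; what the sketch exploits is the known property that a complex neuron's decision boundary consists of two orthogonal hypersurfaces:

```python
# A single complex-valued neuron: s = w1*x1 + w2*x2 + b.
# The zero-sets of Re(s) and Im(s) form two orthogonal lines, so one
# neuron partitions the input plane into four regions -- enough for XOR,
# which no single real-valued neuron (one separating line) can realise.
w1, w2 = 1 + 0j, 0 + 1j      # illustrative hand-picked weights
b = -0.5 - 0.5j

def cvnn_xor(x1, x2):
    s = w1 * x1 + w2 * x2 + b
    # Decode: output 1 when s falls in the 2nd or 4th quadrant.
    return int(s.real * s.imag < 0)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", cvnn_xor(x1, x2))   # 0, 1, 1, 0
```

The four inputs land at -0.5-0.5j, -0.5+0.5j, 0.5-0.5j, and 0.5+0.5j respectively, so the two XOR-positive patterns fall in the 2nd and 4th quadrants while the negatives fall in the 1st and 3rd.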
The application field of CVNNs is very wide. It is difficult to imagine areas dealing with 2-D parameters without entering the realm of complex numbers. Applications in newer technologies such as robotics, signal processing, intelligent systems, communication, space and ocean technology, and medical instrumentation, as well as in older ones such as control and prediction problems, are creating a wide spectrum of examples