of the real sigmoid function [17]. Hence evolved a complex-valued neuron based on the boundedness property of the function in the complex domain, in place of analyticity. Later, T. Nitta (1997) and B. K. Tripathi (2010) formally compiled its 2D transformation property and its wide acceptability in a variety of tasks dealing with the split-type function [7, 18, 22]. They confirmed the ability of this function to handle the magnitude-phase relationship properly during learning. It is also important to mention here that the development of CVNNs is becoming more and more popular not only for complex-valued problems but also for real-valued problems [3-6, 8].
This generated increased interest among researchers in developing many other neurons based on different architectures and activation functions over the complex domain. In 2007, Aizenberg formally presented the multi-valued neuron (MVN) [3], which uses multiple-valued threshold logic to map complex-valued inputs to discrete outputs through a piecewise continuous activation function that maps the complex plane onto the unit circle. MVN learning reduces to movement along the unit circle, governed by a straightforward linear error-correction rule that does not involve the derivative of the activation function. In 2009, Murase developed a complex-valued neuron with phase-encoded inputs [4, 13], especially for real-valued problems.
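The MVN activation described above, which projects any complex net input onto one of k discrete points on the unit circle, can be sketched as follows. This is a minimal illustrative sketch; the value of k and the sector-indexing convention are assumptions, not taken from the text.

```python
import cmath
import math

def mvn_activation(z: complex, k: int) -> complex:
    """Piecewise-continuous MVN-style activation: maps z to one of the
    k k-th roots of unity, chosen by which angular sector contains
    arg(z). (k and this sector convention are illustrative assumptions.)"""
    phi = cmath.phase(z) % (2 * math.pi)        # argument in [0, 2*pi)
    j = int(k * phi / (2 * math.pi))            # sector index 0 .. k-1
    return cmath.exp(1j * 2 * math.pi * j / k)  # discrete output on unit circle
```

Because the output depends only on the argument of z, learning can proceed by rotating the net input along the unit circle, which is why no derivative of the activation is needed.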
He obtained complex-valued features by phase encoding the real-valued features
between [0, π] using the transformation z_t = e^(iπx_t), where x_t are the real-valued input features normalized in [0, 1].
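This phase-encoding step can be sketched directly; the sketch below assumes NumPy and features already normalized to [0, 1]:

```python
import numpy as np

def phase_encode(x) -> np.ndarray:
    """Phase-encode real features x (assumed normalized to [0, 1]) onto
    the upper unit semicircle: z = exp(i*pi*x), so the phase lies in
    [0, pi] and the magnitude is always 1."""
    x = np.asarray(x, dtype=float)
    return np.exp(1j * np.pi * x)
```

The endpoints of the normalized range map to +1 and -1 on the real axis, and every intermediate feature value lands at a distinct phase between them.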
He proposed [4, 13] two CAFs that map complex values to real-valued outputs by dividing the net potential (weighted summation) of the neuron into multiple regions for identifying the classes (like real-valued neurons). Both functions are differentiable with respect to the real and imaginary parts of the net potential, which in turn makes it possible to derive gradient-based learning rules. In contrast to a single real-valued sigmoid neuron, which saturates in only two regions and can solve only linearly separable problems, the single complex-valued neuron developed by Murase saturates in four regions, which significantly improves its classification capability. This idea of phase encoding, to transform the real-valued input
features to the complex domain, was extended by Sundararajan to develop the PE-CELM (2011) [5, 14] and the fully complex-valued RBF (CRBF) classifier (2012) [6] for real-valued classification problems. The phase-encoded transformation maps the real-valued input features into only the first and second quadrants of the complex plane, completely ignoring the other two quadrants. Therefore, the transformation does not fully exploit the advantages of the orthogonal decision boundaries. In 2013, Suresh circumvented this issue by
employing a circular transformation to map the real-valued input features to the complex domain, in the fully complex-valued relaxation neural network (FCRN). The circular transformation effectively performs a one-to-one mapping of the real-valued input features to all four quadrants of the complex domain. Hence, it efficiently exploits the orthogonal decision boundaries of the FCRN classifier. FCRN is a single-hidden-layer network, with a Gaussian-like hyperbolic secant function (sech) in the hidden layer and an exponential function in the output layer for neuron activation. He claimed that it approximates the desired output more accurately with lower computational effort.
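The structure just described — a circular transformation of the inputs, a sech hidden layer, and an exponential output layer — can be sketched as a forward pass. The exact circular transformation and the training procedure of FCRN are not given here; this sketch assumes a simple full-circle mapping x ↦ e^(i·2πx) and random weights, purely to illustrate the layer structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def circular_transform(x) -> np.ndarray:
    """Map real features in [0, 1] one-to-one onto the full unit circle,
    covering all four quadrants. (An assumed illustrative form; FCRN's
    exact circular transformation may differ.)"""
    return np.exp(1j * 2 * np.pi * np.asarray(x, dtype=float))

def sech(z):
    """Gaussian-like hyperbolic secant activation on complex net input."""
    return 1.0 / np.cosh(z)

def fcrn_forward(x, W_h, W_o):
    """Single hidden layer of sech neurons, exponential output neurons."""
    z = circular_transform(x)   # real features -> all four quadrants
    h = sech(W_h @ z)           # hidden-layer responses
    return np.exp(W_o @ h)      # exponential output activation

# Hypothetical dimensions for illustration: 3 inputs, 5 hidden, 2 outputs.
W_h = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
W_o = rng.standard_normal((2, 5)) + 1j * rng.standard_normal((2, 5))
y = fcrn_forward([0.2, 0.5, 0.9], W_h, W_o)   # two complex outputs
```

Note that, unlike the phase encoding restricted to [0, π], this mapping distributes the encoded features over all four quadrants, which is the property the text credits for exploiting orthogonal decision boundaries.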
A majority of researchers in the complex domain have accepted that a complex-valued neuron with a split-type CAF is easy to implement, provides superior decision-making ability, and is effectively applicable to problems throughout the complex domain.
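A split-type CAF of the kind referred to here applies a real activation independently to the real and imaginary parts of the net potential. A minimal sketch, using the real sigmoid on each component:

```python
import numpy as np

def split_sigmoid(z):
    """Split-type complex activation: a real sigmoid applied
    independently to the real and imaginary parts of z, so the
    function stays bounded everywhere in the complex plane."""
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    return sig(np.real(z)) + 1j * sig(np.imag(z))
```

Because each component is a bounded real sigmoid, the split-type function sidesteps the conflict between boundedness and analyticity that motivated this line of work.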