3.2.3 Complex Activation Functions
The activation function in a CVNN neuron is a complex-valued function, unlike in the RVNN, where it is real-valued. The activation function for the CVNN (CAF) is an extension of the activation function used in the RVNN, but it must be adapted to the complex-variable setting. Drawing on the theory of complex variables and complex-valued functions, various researchers have proposed different extensions or formulations of the CAF. A comprehensive investigation of complex-domain error backpropagation learning has been carried out with different activation functions. This section presents two prominent and popular approaches to defining complex-valued activation functions, together with the general properties they have exhibited across a variety of applications. Unlike a real function, a complex function is two-dimensional, being a function of two variables (the real and imaginary parts); hence the surface of the activation function lies in three-dimensional space, since both the real and imaginary parts of the complex function are functions of the real and imaginary parts of the variable.
3.2.3.1 Simple Extension of Real Activation Function
It is well known that the typical and most frequently employed activation function in the real domain has a sigmoidal behavior. Leung and Haykin [21] and Kim and Guest [23] have proposed a straightforward extension of this function to the complex domain as:
f_C(z) = \frac{1}{1 + e^{-z}} \quad (3.1)

where z = x + jy is the net input (the weighted sum of inputs) at the fan-in of each neuron. This is referred to as the Haykin activation function (in honor of its discoverer) and is indeed analytic. Similarly, other commonly used activation functions, e.g. \tanh(z) and \exp(z^2), are extended from the real to the complex domain. Though the formulation of the complex-valued neuron with an analytic complex function, as given in Eq. 3.1, seemed natural and produced many interesting results, it suffers from unboundedness for some input values. Haykin et al. (1991) proposed that this problem can be avoided by scaling the input data to some region of the complex plane. The plots of the activation function are shown in Fig. 3.2; the properties of analyticity and unboundedness can be appreciated from the plots.
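As a minimal numerical sketch (not part of the original text), the complex sigmoid of Eq. 3.1 can be evaluated directly with NumPy; its unboundedness appears near the singularities at z = j(2k + 1)\pi, where 1 + e^{-z} = 0, which is why scaling the inputs to a restricted region of the complex plane is needed:

```python
import numpy as np

def complex_sigmoid(z):
    """Complex extension of the sigmoid, f(z) = 1 / (1 + exp(-z)) as in Eq. 3.1."""
    return 1.0 / (1.0 + np.exp(-z))

# Inside a bounded region of the complex plane the output magnitude is moderate:
print(abs(complex_sigmoid(0.5 + 0.5j)))

# Near a singularity z = j*pi (where 1 + e^{-z} = 0) the magnitude blows up,
# illustrating the unboundedness discussed above:
print(abs(complex_sigmoid(1e-6 + 1j * np.pi)))
```

The second call returns a magnitude on the order of 10^6, since the denominator 1 + e^{-z} is nearly zero there.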
After some algebra, the real and imaginary parts of the Haykin activation function (Eq. 3.1) are, respectively:

f_C(z) = \frac{e^{x}\,(e^{x} + \cos y)}{1 + e^{2x} + 2 e^{x} \cos y} \; + \; j\,\frac{e^{x} \sin y}{1 + e^{2x} + 2 e^{x} \cos y} \quad (3.2)
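The decomposition in Eq. 3.2 can be checked numerically against a direct complex evaluation of Eq. 3.1; this is an illustrative sketch, with function names of my own choosing:

```python
import numpy as np

def complex_sigmoid(z):
    """Direct complex evaluation of Eq. 3.1."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_re_im(x, y):
    """Real and imaginary parts of 1/(1 + e^{-(x + jy)}) per Eq. 3.2."""
    denom = 1.0 + np.exp(2 * x) + 2 * np.exp(x) * np.cos(y)
    re = np.exp(x) * (np.exp(x) + np.cos(y)) / denom
    im = np.exp(x) * np.sin(y) / denom
    return re, im

# The two forms agree at an arbitrary test point:
x, y = 0.3, -1.2
re, im = sigmoid_re_im(x, y)
direct = complex_sigmoid(x + 1j * y)
print(np.isclose(re, direct.real), np.isclose(im, direct.imag))
```

Multiplying the numerator and denominator of 1/(1 + e^{-x}(\cos y - j \sin y)) by its conjugate and then by e^{2x} recovers exactly the two fractions of Eq. 3.2.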