the number of variables increases or the equation becomes highly nonlinear. However, artificial neural networks can be taught to perform the mapping by treating the dynamical system as a black box, collecting the input values and the corresponding output values, and training the network on these data points. Research has shown that the initial data points are the most important factor affecting the generalization performance of a neural network design, followed by the complexity of the network and the dimension of the parameters, in that order.
Hence, in practical applications, it is the mapping properties of the neural network that are put to use. Validating the complex-valued neural network (CVNN) against the benchmarks was the first step; the ability of these networks to map the dynamics of the problems at hand is the step of practical importance.
The mapping properties of the CVNN are the subject of the present chapter. In an actual application, the form of the mapping is unknown, but an error-backpropagation-based neural network captures the mapping [subject to the Kolmogorov conditions (Kolmogorov 1957)] through data points. The learning convergence theorem for complex variables (Nitta 1997) provides the assurance that complex weights exist which solve the mapping problem at hand. To reach this point in weight space, training based on gradient descent is employed: the complex gradient is computed and the weights are updated according to the slope obtained from the gradient formula. Gradient-descent-based neural networks have been used extensively to tackle problems posed by industry; however, the mapping properties of the CVNN per se have not yet been studied in the literature. Nitta [5] reported some mapping problems to bring forth the differences between the CVNN and the real-valued ANN; later, B. K. Tripathi (2010) extended the mapping problems to a wider spectrum. The present chapter investigates and explores the mapping properties of the CVNN, where split-type activation-function-based networks are arranged in a performance echelon for each problem studied. Once a CVNN learns to capture a transformation (both magnitude and argument), it can be employed to map vector fields on the plane.
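As a concrete illustration of this training scheme, the sketch below trains a single complex weight through a split-type (component-wise tanh) activation using the complex gradient-descent update. The target transformation (a rotation by 30 degrees with scaling 0.5), the learning rate, and the data are illustrative choices, not values taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_tanh(z):
    # Split-type activation: tanh applied separately to the real
    # and imaginary components of the complex signal
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

# Target planar transformation: rotation by 30 degrees with scaling 0.5
c = 0.5 * np.exp(1j * np.pi / 6)

# Training data: random points in the unit square and their images
z = rng.uniform(-1, 1, 200) + 1j * rng.uniform(-1, 1, 200)
t = split_tanh(c * z)          # targets pass through the same activation

w = 0.1 + 0.1j                 # single complex weight to be learned
eta = 0.1                      # learning rate

for _ in range(2000):
    y = split_tanh(w * z)
    e = t - y                  # complex-valued error
    # Complex gradient for the split activation: the real and imaginary
    # error channels are weighted by their own tanh derivatives
    d = e.real * (1 - y.real**2) + 1j * (e.imag * (1 - y.imag**2))
    w += eta * np.mean(d * np.conj(z))

print(abs(w - c))              # residual shrinks toward 0
```

Note that the update Δw = η · δ · conj(z), with δ combining the two channels separately, is exactly what falls out of differentiating the squared error with respect to the real and imaginary parts of the weight.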
5.2 Conformal Mapping on Plane
Complex numbers are directly related to the two-dimensional plane. A variety of mappings or transformations on a plane are used to solve a number of mathematical and practical engineering problems [1, 4]. Such a mapping on the complex plane preserves the angles between oriented curves, and the phase of each point on a curve is also maintained during the transformation. As described in [2, 5], the complex-valued signals flowing through a complex-domain network are the unit of learning, which enables the network to learn the 2D motion of signals. In contrast, a neural network in the real domain handles only one-dimensional motion of signals. This is the main reason why a neural network extended to the complex domain can learn mappings on the plane, while an equivalent real-domain network cannot. Conformal mapping is used to map complicated regions conformally onto simpler, standard regions, where boundary value
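The angle-preserving property described above is easy to check numerically. The sketch below uses f(z) = z², an arbitrarily chosen point z0, and two arbitrary curve directions; since f is holomorphic with f'(z0) ≠ 0, the angle between the image tangents matches the angle between the original directions.

```python
import numpy as np

# Conformality check for the illustrative map f(z) = z**2
f = lambda z: z**2

z0 = 1.0 + 0.5j              # arbitrarily chosen crossing point
d1 = np.exp(1j * 0.3)        # direction of the first curve at z0
d2 = np.exp(1j * 1.2)        # direction of the second curve at z0

h = 1e-6                     # small step for numerical tangents
t1 = (f(z0 + h * d1) - f(z0)) / h   # image tangent of curve 1
t2 = (f(z0 + h * d2) - f(z0)) / h   # image tangent of curve 2

angle_before = np.angle(d2 / d1)    # angle between the original curves
angle_after = np.angle(t2 / t1)     # angle between the image curves

print(angle_before, angle_after)    # the two angles agree
```

Both tangents are scaled and rotated by the same factor f'(z0), so their relative angle is unchanged; this is precisely the conformal property the text refers to.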