the overall output of the single neuron system, as shown in Fig. 4.1. These models may equally be used in the real domain by discarding the imaginary part.
4.3.2 Model-1
Neuron modeling is concerned with relating the structure of the neuron to its operation. Conventional neuron models in the real and complex domains are generally based on summation and radial basis aggregation functions. The traditional MLP, which uses the summation function as its basis function, offers a global approximation to the input-output mapping, but it may be plagued by long learning times and has a tendency to get trapped in poor local minima. In contrast, the RBF network, which often uses the Gaussian function as its basis function, offers a local approximation to the input-output mapping and often provides faster and more efficient learning. However, it is inefficient at approximating constant-valued functions, as addressed in [37]. If a curve representing a training pattern is nearly constant over an interval, it is difficult to approximate this constant-valued function with a Gaussian. Learning converges quickly in an RBF network, which therefore presents fewer problems than the MLP, but the number of RBF neurons may become quite large for applications with a large number of input variables.
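To make the contrast concrete, here is a minimal sketch of the two conventional aggregation (basis) functions for a real-valued input; the variable names, test values, and the Gaussian width sigma are illustrative assumptions, not taken from the text.

```python
import numpy as np

def summation_potential(z, w, bias):
    """Global MLP-style aggregation: weighted sum plus bias."""
    return np.dot(w, z) + bias

def gaussian_rbf_potential(z, center, sigma):
    """Local RBF-style aggregation: Gaussian of the distance to a center."""
    return np.exp(-np.linalg.norm(z - center) ** 2 / (2.0 * sigma ** 2))

z = np.array([0.4, -1.2, 0.7])        # input pattern (assumed values)
w = np.array([0.5, 0.3, -0.8])        # MLP weights
center = np.array([0.0, -1.0, 1.0])   # RBF center

print(summation_potential(z, w, bias=0.1))           # responds anywhere in input space
print(gaussian_rbf_potential(z, center, sigma=1.0))  # decays quickly away from the center
```

Because the Gaussian response dies off away from its center, covering a region where the target is nearly constant requires many overlapping units, which is the inefficiency noted above.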
The main motivation in designing the proposed aggregation function is to take advantage of the merits of both perceptron and radial basis processing. This neuron has a compensatory basis function that adaptively selects the best proportion of local and global optimization, and these proportions are then multiplied to yield a higher-order aggregation function, which may later be fed through an activation function to create the final output. This kind of neuron looks complex at first, but when used to solve a complicated problem it yields reasonably efficient processing compared to conventional neurons. It is also free from the basic problem of higher-order neurons, which arises when the number of inputs is large. The net potential of the proposed neuron is a weighted composition of summation and radial basis subfunctions. Thus the input aggregation proposed for this neuron is a functional, which formulates its compensatory structure. The information processed through the subfunctions is integrated nonlinearly, in a desired proportion ($\gamma : \lambda$), in the RSP (RBF-Summation-Product) model. The compensatory parameters $\gamma$ and $\lambda$ specify the contributions of the radial basis and summation subfunctions, taking into account the vagueness involved. With a view to achieving a robust aggregation function, the parameters $\gamma$ and $\lambda$ are themselves made adaptable in the course of training.
The product or intersection is usually inserted in formulas where one speaks about a nonlinear operation, expressed here as $a \otimes b = 1 + a + b + ab$ (equivalently, $(1 + a)(1 + b)$). The novel neuron constructed with this operation is named RSP. Let $Z = [z_1, z_2 \ldots z_L]$ be the vector of input signals, $Y$ be an output, and $f_C$ be the complex-valued activation function defined in Eq. (3.3). $Z^T$ is the transpose of vector $Z$ and $\bar{z}$ is the complex conjugate of $z$. $W_m = [w_{1m}, w_{2m} \ldots w_{Lm}]$ is a vector of weights from the input layer
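A minimal sketch of how these pieces might fit together, assuming the compensatory parameters scale the two subpotentials before the $a \otimes b$ composition, a Gaussian of $\|Z - c\|^2$ as the radial basis subfunction, a plain inner product of $W_m$ with $Z$ (without conjugation) as the summation subfunction, and a split-type tanh standing in for $f_C$ of Eq. (3.3); none of these specific choices is fixed by the excerpt.

```python
import numpy as np

def f_C(v):
    # Stand-in for the activation of Eq. (3.3): a split-type tanh applied
    # separately to the real and imaginary parts (an assumption here).
    return np.tanh(v.real) + 1j * np.tanh(v.imag)

def rsp_potential(z, w, c, gamma, lam):
    # Radial basis subfunction (local term), scaled by gamma.
    a = gamma * np.exp(-np.sum(np.abs(z - c) ** 2))
    # Summation subfunction (global term), scaled by lambda.
    b = lam * np.dot(w, z)
    # Nonlinear compensatory composition: a (x) b = 1 + a + b + ab.
    return 1 + a + b + a * b

L = 3
rng = np.random.default_rng(0)
z = rng.standard_normal(L) + 1j * rng.standard_normal(L)  # input vector Z
w = rng.standard_normal(L) + 1j * rng.standard_normal(L)  # weight vector W_m
c = np.zeros(L, dtype=complex)                            # RBF center
gamma, lam = 0.6, 0.4  # compensatory proportions; adapted during training

Y = f_C(rsp_potential(z, w, c, gamma, lam))
print(Y)
```

In training, $\gamma$ and $\lambda$ would be updated by the same gradient rule as the weights, letting the neuron drift toward whichever of the local or global term serves the mapping better.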
 