$(l = 1, \ldots, L)$ to the summation part of the $m$th RSP neuron $(m = 1, \ldots, M)$, and $W_m^{RB} = [w_{1m}^{RB}, w_{2m}^{RB}, \ldots, w_{Lm}^{RB}]$ is a vector of weights from the input layer to the radial basis part of the $m$th RSP neuron. All weights, biases, and input-output signals are complex numbers. By convention, $w_{lm}$ is the weight that connects the $l$th neuron to the $m$th neuron. The net potential of this neuron is defined by the following aggregation function:

$$\Phi_m(z_1, z_2, \ldots, z_L) = \lambda_m \times \left(W_m^T Z\right) + \gamma_m \times \exp\!\left(-\left\|Z - W_m^{RB}\right\|^2\right), \qquad (4.1)$$

where $\|Z - W_m^{RB}\|^2 = (Z - W_m^{RB})^H \times (Z - W_m^{RB})$; here the superscript $H$ represents the matrix complex conjugate transposition. The output of the neuron may be expressed as $Y_m = f_C\!\left(\Phi_m(z_1, z_2, \ldots, z_L)\right)$.
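For concreteness, the aggregation of Eq. (4.1) can be sketched in a few lines of NumPy. This is a minimal illustration, not the book's implementation: the function names are mine, and the "split" sigmoid used for $f_C$ is an assumption (one common choice for complex-valued neurons; the particular activation is not fixed by this passage).

```python
import numpy as np

def f_c(v):
    # Assumed complex activation: a "split" sigmoid applied separately
    # to the real and imaginary parts; the book's f_C may differ.
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    return sig(v.real) + 1j * sig(v.imag)

def rsp_aggregation(z, w_sum, w_rb, lam, gam):
    """Net potential Phi_m of one RSP neuron, per Eq. (4.1)."""
    summation_part = np.dot(w_sum, z)        # W_m^T Z
    d = z - w_rb                             # Z - W_m^RB
    sq_norm = np.real(np.vdot(d, d))         # (Z - W_m^RB)^H (Z - W_m^RB)
    return lam * summation_part + gam * np.exp(-sq_norm)

# One neuron with L = 3 complex inputs (random data for illustration).
rng = np.random.default_rng(0)
L = 3
z = rng.standard_normal(L) + 1j * rng.standard_normal(L)
w_sum = rng.standard_normal(L) + 1j * rng.standard_normal(L)
w_rb = rng.standard_normal(L) + 1j * rng.standard_normal(L)

phi = rsp_aggregation(z, w_sum, w_rb, lam=0.5 + 0j, gam=0.5 + 0j)
y_m = f_c(phi)                               # Y_m = f_C(Phi_m)
```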
4.3.3 Learning Rules for Model-1
A multilayer network can be constructed from the new neurons in the same way as a network of conventional neurons. The task of learning is to tune the parameters of the operator $f$ and to model the underlying parametric relationship between the inputs and the output through the weight parameter $W$. We assume that the neuron observes $n$ input-output pairs $(z_1, y_1), \ldots, (z_n, y_n)$ and generates a function space that maps the vector space $Z$ $(z \in Z)$ into the corresponding output space $Y$ $(y \in Y)$.

Consider a commonly used three-layer network (L-M-N): the first layer has $L$ inputs, the second layer has $M$ of the proposed neurons, and the output layer consists of $N$ conventional neurons. This network is used in all the applications presented in this book that are based on the RSP or C-RSP neuron model. Let $\eta \in [0, 1]$ be the learning rate and $f'$ be the derivative of the function $f$; $w_0$ is a bias and $z_0 = 1 + j$ is the bias input, where $j = \sqrt{-1}$ is the imaginary unity.
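Before the updates are derived, it may help to see the network's parameters laid out in code. The sketch below continues the NumPy style of the earlier example; the dictionary layout and random initialization are assumptions of this sketch, and only the structure (L inputs, M RSP neurons, N output neurons, the bias input $z_0 = 1 + j$, and $\eta \in [0, 1]$) comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, N = 3, 5, 2          # example sizes for the L-M-N network

def cplx(*shape):
    # Small random complex arrays; this initialization scheme is an
    # assumption of the sketch, not prescribed by the text.
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

params = {
    "W_sum": cplx(M, L),   # W_m: summation-part weights, one row per RSP neuron
    "W_rb":  cplx(M, L),   # W_m^RB: radial-basis-part weights
    "lam":   cplx(M),      # compensatory parameters lambda_m
    "gam":   cplx(M),      # compensatory parameters gamma_m
    "w0":    cplx(M),      # hidden-layer biases w_0m
    "W_out": cplx(N, M),   # weights of the N conventional output neurons
}
z0 = 1 + 1j                # bias input z_0 = 1 + j
eta = 0.1                  # learning rate, eta in [0, 1]
```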
The weight update rules for the various parameters of the considered feedforward network of C-RSP neurons are given here. Let $V_m$ be the net potential of the $m$th RSP neuron in the hidden layer; then, from Eq. (4.1),
$$V_m = \lambda_m \left(W_m^T Z\right) + \gamma_m \exp\!\left(-\left\|Z - W_m^{RB}\right\|^2\right) + w_{0m} z_0 \qquad (4.2)$$
The net internal potential of the RSP neuron may also be expressed term-wise as follows:

$$V_m = V_m^{\pi 1} + V_m^{\pi 2} + w_{0m} z_0, \qquad (4.3)$$

where $V_m^{\pi 1} = \lambda_m \left(W_m^T Z\right)$ and $V_m^{\pi 2} = \gamma_m \exp\!\left(-\left\|Z - W_m^{RB}\right\|^2\right)$.
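The term-wise form of Eqs. (4.2)-(4.3) maps directly onto code. Continuing the sketch above (the names `v_pi1` and `v_pi2` follow the reconstruction here and are not authoritative), the net potentials of all $M$ hidden RSP neurons can be computed at once:

```python
import numpy as np

def hidden_net_potentials(z, params, z0=1 + 1j):
    """Net potentials V_m of all M RSP neurons, per Eqs. (4.2)-(4.3)."""
    d = z[None, :] - params["W_rb"]                # rows: Z - W_m^RB
    sq_norms = np.sum(np.abs(d) ** 2, axis=1)      # ||Z - W_m^RB||^2
    v_pi1 = params["lam"] * (params["W_sum"] @ z)  # V_m^{pi1} = lambda_m W_m^T Z
    v_pi2 = params["gam"] * np.exp(-sq_norms)      # V_m^{pi2} = gamma_m exp(-||.||^2)
    return v_pi1 + v_pi2 + params["w0"] * z0       # Eq. (4.3)
```

Applying the activation $f_C$ element-wise to the returned vector then gives the hidden-layer outputs, as in the expression that follows.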
From Eq. (3.3), the output of a neuron in the hidden layer can be expressed as