$(n = 1 \ldots N)$ proposed neurons, respectively. All weights, biases, and input-output signals are complex numbers. Let $Z = [z_1, z_2, \ldots, z_L]$ be the vector of input signals. Let $W_m = [w_{1m}, w_{2m}, \ldots, w_{Lm}]$ be the vector of weights from the input layer to the $m$th hidden neuron, and let $W_n = [w_{1n}, w_{2n}, \ldots, w_{Mn}]$ be the vector of weights from the hidden layer to the $n$th output neuron; $w_0$ is the bias weight and $z_0$ is the bias input. From Eqs. (3.3) and (4.26), the output of any neuron in the hidden layer can be expressed as:
$$Y_m = f(\Re(V_m)) + j \, f(\Im(V_m)), \quad \text{where } V_m = \left( \sum_{l=0}^{L} w_{lm}\, z_l^{\,d} \right)^{1/d} \tag{4.32}$$
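Reading Eq. (4.32) as a split (real-imaginary) activation wrapped around a root-power-mean aggregation, the hidden-neuron computation can be sketched in a few lines of NumPy. This is a minimal illustration, not the book's code: the function names, the choice of tanh for $f$, and the use of NumPy's principal-branch complex powers are all assumptions made here.

```python
import numpy as np

def split_activation(v, f=np.tanh):
    # Split activation in the style of Eq. (3.3): f is applied
    # separately to the real and imaginary parts of the net potential.
    return f(v.real) + 1j * f(v.imag)

def hidden_neuron(z, w_m, d):
    # Eq. (4.32) for one hidden neuron m.
    # z   : (L+1,) complex inputs, with the bias input z_0 at index 0
    # w_m : (L+1,) complex weights w_0m .. w_Lm
    # Complex powers follow NumPy's principal branch, an implementation
    # choice the text does not pin down.
    V_m = np.sum(w_m * z**d) ** (1.0 / d)
    return split_activation(V_m), V_m
```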
Similarly, the output of each neuron in the output layer can be expressed as:
$$Y_n = f(\Re(V_n)) + j \, f(\Im(V_n)), \quad \text{where } V_n = \left( \sum_{m=0}^{M} w_{mn}\, Y_m^{\,d} \right)^{1/d} \tag{4.33}$$
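Stacking Eq. (4.32) and Eq. (4.33) gives the whole forward pass. The sketch below vectorizes over all hidden and output neurons; the shapes, the bias conventions, and the random test data are assumptions made for the demonstration.

```python
import numpy as np

def forward(z, W_hid, W_out, d, f=np.tanh):
    # Forward pass of the complex root-power-mean network, Eqs. (4.32)-(4.33).
    # z: (L+1,) complex inputs, bias z_0 first.
    # W_hid: (L+1, M) input-to-hidden weights; W_out: (M+1, N) hidden-to-output.
    split = lambda v: f(v.real) + 1j * f(v.imag)         # Eq. (3.3)
    V_hid = (z**d @ W_hid) ** (1.0 / d)                  # V_m of Eq. (4.32)
    Y_hid = np.concatenate(([1.0 + 0j], split(V_hid)))   # prepend bias Y_0
    V_out = (Y_hid**d @ W_out) ** (1.0 / d)              # V_n of Eq. (4.33)
    return split(V_out), V_out, Y_hid, V_hid

# quick smoke test on random complex data
rng = np.random.default_rng(0)
L, M, N, d = 4, 3, 2, 2
z = np.concatenate(([1.0 + 0j], rng.normal(size=L) + 1j * rng.normal(size=L)))
W_hid = rng.normal(size=(L + 1, M)) + 1j * rng.normal(size=(L + 1, M))
W_out = rng.normal(size=(M + 1, N)) + 1j * rng.normal(size=(M + 1, N))
Y_out, V_out, Y_hid, V_hid = forward(z, W_hid, W_out, d)
print(Y_out.shape)   # (2,)
```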
The update equation for the learning parameters in the output layer is:
$$\Delta w_{mn} = \eta \left[ \Re(e_n)\, f'(\Re(V_n)) + j \, \Im(e_n)\, f'(\Im(V_n)) \right] \frac{\overline{V_n^{(1-d)}\, Y_m^{\,d}}}{d} \tag{4.34}$$
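Eq. (4.34) updates every hidden-to-output weight from the split error term and the conjugated derivative of the root-power aggregation. A sketch, assuming tanh as $f$ (so $f' = 1 - \tanh^2$) and the conjugation placement of standard complex gradient descent:

```python
import numpy as np

def output_update(e, V_out, Y_hid, d, eta, fprime=lambda x: 1 - np.tanh(x)**2):
    # Eq. (4.34): Delta w_mn for all m, n at once.
    # e: (N,) complex errors; V_out: (N,) output net potentials;
    # Y_hid: (M+1,) hidden outputs including the bias unit.
    delta_n = e.real * fprime(V_out.real) + 1j * e.imag * fprime(V_out.imag)
    # conj(dV_n/dw_mn) = conj(V_n**(1-d) * Y_m**d) / d, as an (M+1, N) array
    grad = np.conj(np.outer(Y_hid**d, V_out**(1 - d))) / d
    return eta * delta_n * grad      # delta_n broadcasts across rows m
```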
The update equation for the learning parameters between the input and hidden layers is obtained as follows. Let $AT_{mn}$ and $VT_{mn}$ be the common terms
$$AT_{mn} = \left( A1_{mn}\, A2_{mn} + A3_{mn}\, A4_{mn} \right), \qquad VT_{mn} = \left( A3_{mn}\, A2_{mn} - A1_{mn}\, A4_{mn} \right)$$
$$A1_{mn} = \Re(Y_m^{\,d})\, \Re(Y_m) + \Im(Y_m^{\,d})\, \Im(Y_m), \qquad A2_{mn} = \Re(V_n^{\,d})\, \Re(V_n) + \Im(V_n^{\,d})\, \Im(V_n)$$
$$A3_{mn} = \Im(Y_m^{\,d})\, \Re(Y_m) - \Re(Y_m^{\,d})\, \Im(Y_m), \qquad A4_{mn} = \Im(V_n^{\,d})\, \Re(V_n) - \Re(V_n^{\,d})\, \Im(V_n)$$
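The four A-terms have a compact complex reading: under the reconstruction above, $A1_{mn} + j A3_{mn} = Y_m^{\,d}\, \overline{Y}_m$ and $A2_{mn} + j A4_{mn} = V_n^{\,d}\, \overline{V}_n$, so $AT_{mn} + j\,VT_{mn}$ is simply the product of the first number with the conjugate of the second. A quick numerical check of that identity (the values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
Y_m = rng.normal() + 1j * rng.normal()
V_n = rng.normal() + 1j * rng.normal()
d = 2

a = Y_m**d * np.conj(Y_m)        # A1 + j*A3
b = V_n**d * np.conj(V_n)        # A2 + j*A4
A1, A3, A2, A4 = a.real, a.imag, b.real, b.imag

AT = A1 * A2 + A3 * A4
VT = A3 * A2 - A1 * A4
assert np.isclose(AT + 1j * VT, a * np.conj(b))   # (A1+jA3) * conj(A2+jA4)
```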
The hidden-layer error term is then
$$\delta_{lm} = \sum_{n=1}^{N} \frac{1}{|V_n|^{2d}} \Big[ \left\{ \Re(e_n) f'(\Re(V_n)) \Re(w_{mn}) + \Im(e_n) f'(\Im(V_n)) \Im(w_{mn}) \right\} \times \left\{ f'(\Re(V_m))\, AT_{mn} + j\, f'(\Im(V_m))\, VT_{mn} \right\}$$
$$\qquad + \; j \left\{ \Im(e_n) f'(\Im(V_n)) \Re(w_{mn}) - \Re(e_n) f'(\Re(V_n)) \Im(w_{mn}) \right\} \times \left\{ f'(\Re(V_m))\, AT_{mn} + j\, f'(\Im(V_m))\, VT_{mn} \right\} \Big] \tag{4.35}$$
$$\Delta w_{lm} = \frac{\eta}{N d}\; \overline{V_m^{(1-d)}\, z_l^{\,d}}\; \frac{\delta_{lm}}{|Y_m|^{2}} \tag{4.36}$$
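Putting Eqs. (4.35) and (4.36) together, the hidden-layer update can be sketched as below. The vectorized shapes, the bias handling, and again the conjugation placement are assumptions; `delta` carries the $1/|V_n|^{2d}$ factor inside the sum over output neurons, with the $1/|Y_m|^2$ factor applied afterwards as in Eq. (4.36).

```python
import numpy as np

def hidden_update(e, z, W_out, Y_hid, V_hid, V_out, d, eta,
                  fprime=lambda x: 1 - np.tanh(x)**2):
    # Eqs. (4.35)-(4.36): update for the input-to-hidden weights w_lm.
    # e: (N,) output errors; z: (L+1,) inputs with bias z_0 first;
    # W_out: (M+1, N) hidden-to-output weights (row 0 = bias weights);
    # Y_hid: (M+1,) hidden outputs with bias; V_hid: (M,) hidden potentials.
    N = e.shape[0]
    Y = Y_hid[1:]                          # non-bias hidden outputs
    a = Y**d * np.conj(Y)                  # A1_mn + j*A3_mn, per hidden m
    b = V_out**d * np.conj(V_out)          # A2_mn + j*A4_mn, per output n
    ATVT = np.outer(a, np.conj(b))         # AT_mn + j*VT_mn, shape (M, N)
    W = W_out[1:]                          # w_mn, shape (M, N)
    # braced error factors of Eq. (4.35); both broadcast to (M, N)
    c1 = e.real * fprime(V_out.real) * W.real + e.imag * fprime(V_out.imag) * W.imag
    c2 = e.imag * fprime(V_out.imag) * W.real - e.real * fprime(V_out.real) * W.imag
    # split-derivative factor through the hidden activation, (M, N)
    B = (fprime(V_hid.real)[:, None] * ATVT.real
         + 1j * fprime(V_hid.imag)[:, None] * ATVT.imag)
    delta = np.sum((c1 + 1j * c2) * B / np.abs(V_out)**(2 * d), axis=1)   # (M,)
    # Eq. (4.36): outer product over inputs l and hidden neurons m
    grad = np.conj(np.outer(z**d, V_hid**(1 - d)))
    return (eta / (N * d)) * grad * (delta / np.abs(Y)**2)
```

A full training step would run the forward sketch above, form the errors $e_n$ from the targets, and apply `output_update` and `hidden_update` to the two weight matrices.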