The weighted contribution of a 3D input through a 3D weight vector $w = (w^x, w^y, w^z)^T$ is taken as the 3D vector product, which can be written as a skew-symmetric matrix acting on the input vector. For the input-to-hidden weight $w_{lm}$ and the hidden-to-output weight $w_{mn}$:

$$
W_{lm} =
\begin{pmatrix}
0 & -w_{lm}^{z} & w_{lm}^{y} \\
w_{lm}^{z} & 0 & -w_{lm}^{x} \\
-w_{lm}^{y} & w_{lm}^{x} & 0
\end{pmatrix}
\quad \text{and} \quad
W_{mn} =
\begin{pmatrix}
0 & -w_{mn}^{z} & w_{mn}^{y} \\
w_{mn}^{z} & 0 & -w_{mn}^{x} \\
-w_{mn}^{y} & w_{mn}^{x} & 0
\end{pmatrix}
\qquad (6.1)
$$

so that $W_{lm}\,x = w_{lm} \times x$ and $W_{mn}\,x = w_{mn} \times x$ for any 3D vector $x$.
The net potential of the $m$th neuron in the hidden layer can be given as follows:

$$
V_m = \sum_{l} W_{lm}\, I_l + \theta_m \qquad (6.2)
$$

where $\theta_m$ is the 3D bias of the neuron.
The activation function for a 3D vector-valued neuron is the 3D extension of a real activation function, applied to each component of the net potential:

$$
Y_m = f(V_m) = \left( f(V_m^{x}),\; f(V_m^{y}),\; f(V_m^{z}) \right)^{T} \qquad (6.3)
$$
Similarly, for the $n$th neuron in the output layer,

$$
V_n = \sum_{m} W_{mn}\, Y_m + \theta_n \qquad (6.4)
$$

$$
Y_n = f(V_n) = \left( f(V_n^{x}),\; f(V_n^{y}),\; f(V_n^{z}) \right)^{T} \qquad (6.5)
$$
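Under the definitions above, one layer of the forward pass can be sketched as follows. This is a minimal NumPy illustration, not the chapter's own code: the function and variable names are ours, and `tanh` merely stands in for the real activation $f$.

```python
import numpy as np

def skew(w):
    """Matrix form of the 3D vector product: skew(w) @ x equals np.cross(w, x)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def layer_forward(weights, inputs, theta, f=np.tanh):
    """V = sum_l W_l I_l + theta, then Y = f(V) applied componentwise."""
    V = sum(skew(w) @ x for w, x in zip(weights, inputs)) + theta
    return f(V)  # NumPy applies f to each of the three components

# One hidden neuron m fed by one 3D input, then one output neuron n:
I_l  = np.array([0.5, -0.2, 0.1])
w_lm = np.array([0.3, 0.7, -0.4])
Y_m  = layer_forward([w_lm], [I_l], theta=np.zeros(3))   # Eqs. (6.2)-(6.3)
w_mn = np.array([-0.1, 0.2, 0.6])
Y_n  = layer_forward([w_mn], [Y_m], theta=np.zeros(3))   # Eqs. (6.4)-(6.5)
```

Because the weighting is a vector product, the same computation could use `np.cross(w, x)` directly; the skew-matrix form mirrors the notation of Eq. (6.1).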
The mean square error function can be defined as

$$
E = \frac{1}{2} \sum_{n=1}^{N} |e_n|^{2} \qquad (6.6)
$$

where $e_n$ is the 3D error vector at the $n$th output neuron.
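Since each $e_n$ is a 3D vector, Eq. (6.6) is just half the sum of squared vector norms. A minimal sketch (the helper name `mse` is ours):

```python
import numpy as np

def mse(errors):
    """E = (1/2) * sum_n |e_n|^2 over the 3D error vectors e_n (Eq. 6.6)."""
    return 0.5 * sum(float(e @ e) for e in errors)

# e.g. two output neurons:
E = mse([np.array([0.1, 0.0, -0.2]), np.array([0.0, 0.3, 0.0])])
```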
In the 3D vector version of the back-propagation algorithm (3DV-BP), the update for any weight is obtained by gradient descent on the error function, taken independently along the three components of the weight:

$$
\Delta w = -\eta \left( \frac{\partial E}{\partial w^{x}},\; \frac{\partial E}{\partial w^{y}},\; \frac{\partial E}{\partial w^{z}} \right)^{T}
$$
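The rule treats the three components of a weight as ordinary real parameters. As an illustration only (a central-difference numerical gradient on a toy scalar error, not the analytic 3DV-BP gradients derived below):

```python
import numpy as np

def delta_w(E, w, eta=0.1, h=1e-6):
    """Delta w = -eta * (dE/dw^x, dE/dw^y, dE/dw^z)^T, gradient by central differences."""
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = h
        g[i] = (E(w + d) - E(w - d)) / (2.0 * h)
    return -eta * g

# Toy error E(w) = 1/2 |w - t|^2: the step moves w toward the target t.
t = np.array([1.0, -1.0, 0.5])
step = delta_w(lambda w: 0.5 * float((w - t) @ (w - t)), np.zeros(3))
```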
Then the weights and biases in the output layer can be updated as follows:

$$
\Delta \theta_n = \eta \left( e_n \odot f'(V_n) \right)
$$

$$
\Delta w_{mn} = \eta \, Y_m \otimes \left( e_n \odot f'(V_n) \right)
$$

where $\odot$ denotes the component-wise product and $\otimes$ the 3D vector product.
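The output-layer rules above can be sketched as a single function. Assumptions are flagged in the comments: `tanh` is used so that $f'(V) = 1 - \tanh^2(V)$, and the names are illustrative.

```python
import numpy as np

def output_updates(e_n, V_n, Y_m, eta=0.1):
    """Delta theta_n = eta * (e_n (.) f'(V_n)),
    Delta w_mn = eta * Y_m x (e_n (.) f'(V_n)).
    Assumes f = tanh, so f'(V) = 1 - tanh(V)**2 componentwise."""
    fprime = 1.0 - np.tanh(V_n) ** 2      # f'(V_n), componentwise
    delta_n = e_n * fprime                # e_n (.) f'(V_n), componentwise product
    return eta * delta_n, eta * np.cross(Y_m, delta_n)
```

The vector product enters because $V_n$ depends on $w_{mn}$ through $w_{mn} \times Y_m$, so differentiating with respect to each weight component yields a cross product with $Y_m$.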