if ( ∂E(t−1)/∂w_lm × ∂E(t)/∂w_lm = 0 ) then
{
    Δw_lm(t) = −sign( ∂E(t)/∂w_lm ) × Δ_lm(t)
    w_lm(t+1) = w_lm(t) + Δw_lm(t)
}
For the imaginary part of the weight:
if ( ∂E(t−1)/∂w_lm × ∂E(t)/∂w_lm > 0 ) then
{
    Δ_lm(t) = min( Δ_lm(t−1) × μ+ , Δ_max )
    Δw_lm(t) = −sign( ∂E(t)/∂w_lm ) × Δ_lm(t)
    w_lm(t+1) = w_lm(t) + Δw_lm(t)
}
if ( ∂E(t−1)/∂w_lm × ∂E(t)/∂w_lm < 0 ) then
{
    Δ_lm(t) = max( Δ_lm(t−1) × μ− , Δ_min )
    if ( E(t) > E(t−1) ) then
        w_lm(t+1) = w_lm(t) − Δw_lm(t−1)
    and ∂E(t)/∂w_lm = 0
}
if ( ∂E(t−1)/∂w_lm × ∂E(t)/∂w_lm = 0 ) then
{
    Δw_lm(t) = −sign( ∂E(t)/∂w_lm ) × Δ_lm(t)
    w_lm(t+1) = w_lm(t) + Δw_lm(t)
}
t = t + 1
}
Until(converged)
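The update rules above, applied separately to the real and imaginary parts of each complex weight, can be sketched as a single-parameter step function. This is a minimal illustration, not the authors' implementation: the function name, argument layout, and the particular Δ_min/Δ_max values are assumptions; the gradients and error values E(t), E(t−1) would come from the surrounding training loop.

```python
import numpy as np

# Assumed hyperparameter values; the text recommends mu_minus = 0.5 and
# mu_plus = 1.2, with the update value clamped to [DELTA_MIN, DELTA_MAX].
MU_PLUS, MU_MINUS = 1.2, 0.5
DELTA_MIN, DELTA_MAX = 1e-6, 50.0

def rprop_step(w, grad, prev_grad, delta, prev_dw, err, prev_err):
    """One RPROP update for a single real-valued parameter w_lm.

    For a complex weight this is called once for the real part and once
    for the imaginary part.  Returns the new weight, the new update
    value delta, the applied step dw, and the gradient to remember for
    the next iteration (zeroed after a sign change, per the text).
    """
    prod = prev_grad * grad
    if prod > 0:                      # gradient kept its sign: accelerate
        delta = min(delta * MU_PLUS, DELTA_MAX)
        dw = -np.sign(grad) * delta
        w = w + dw
    elif prod < 0:                    # sign change: a minimum was jumped over
        delta = max(delta * MU_MINUS, DELTA_MIN)
        if err > prev_err:            # revert the previous step if error grew
            w = w - prev_dw
        dw = prev_dw
        grad = 0.0                    # forces the "= 0" branch next iteration
    else:                             # prod == 0: plain step with current delta
        dw = -np.sign(grad) * delta
        w = w + dw
    return w, delta, dw, grad
```

Note that only the sign of the gradient enters the step, which is the defining property of RPROP: the step length is carried entirely by the adaptive update value Δ_lm.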
The update values and the weights are changed each time a new training set is presented. All update values (Δ_lm) are initialized to Δ_0. The initial update value, Δ_0, is chosen in reasonable proportion to the size of the initial weights. In order to prevent the weights from becoming too small or too large, the range of the update value is restricted to a minimum limit (Δ_min) and a maximum limit (Δ_max). In experiments it was consistently observed that setting these update values quite small yields a smooth learning process. The choice of decrement factor μ− = 0.5 and increment factor μ+ = 1.2 generally yields good results. It was also observed that small variations in these values neither improved nor deteriorated the learning process.
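To illustrate the restricted range: with the recommended factors μ+ = 1.2 and μ− = 0.5, a run of same-sign gradients grows the update value geometrically until it saturates at Δ_max, and each sign change halves it again (down to Δ_min at the limit). The Δ_0 and limit values below are arbitrary, chosen only for illustration:

```python
delta, delta_min, delta_max = 0.1, 1e-6, 1.0   # illustrative Delta_0 and limits
mu_plus, mu_minus = 1.2, 0.5                   # factors recommended in the text

# Twenty consecutive same-sign gradient products: delta saturates at delta_max.
for _ in range(20):
    delta = min(delta * mu_plus, delta_max)
print(delta)        # 1.0 (clamped at delta_max)

# A single sign change halves it again.
delta = max(delta * mu_minus, delta_min)
print(delta)        # 0.5
```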