$$ f'(s) = \bigl(1 + f(s)\bigr)\bigl(1 - f(s)\bigr) $$
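For reference, since the activation in this example is the hyperbolic tangent, this derivative factor follows from the standard identity

$$ f(s) = \tanh(s) \quad\Rightarrow\quad f'(s) = 1 - \tanh^2(s) = \bigl(1 + f(s)\bigr)\bigl(1 - f(s)\bigr) $$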
Then

$$ e_0 = (0 - 0.894)\bigl(1 + \tanh 1.44\bigr)\bigl(1 - \tanh 1.44\bigr) = -0.18 $$
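As a quick numerical check of this step (a sketch in Python; the net input 1.44 and the desired output 0 are read from the worked numbers above):

    import math

    # Output-layer error e_0 as worked above.
    # From the example: net input s_0 = 1.44, desired output d_0 = 0,
    # actual output y_0 = tanh(1.44) ~ 0.894.
    s0, d0 = 1.44, 0.0
    y0 = math.tanh(s0)                      # ~ 0.894
    e0 = (d0 - y0) * (1 + y0) * (1 - y0)    # (d_0 - y_0) * f'(s_0)
    print(f"e_0 = {e0:.2f}")                # e_0 = -0.18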
Similarly, $e_1 = 0.399$. Based on this output error, the contribution of each hidden-layer node to the error must be found. The weights are then adjusted based on this error using

$$ \Delta w_{kj} = \eta e_k x_j $$

where $\eta$ is the network learning rate constant, chosen as 0.3. A large value of $\eta$ can cause instability, and a very small one can make the learning process much too slow. Then
$$ \Delta w_{00} = (0.3)(-0.18)(0.664) = -0.036 $$
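The same update in code, using the quoted values $\eta = 0.3$, $e_0 = -0.18$, and the hidden-node output $x_0 = 0.664$:

    eta, e0, x0 = 0.3, -0.18, 0.664
    dw00 = eta * e0 * x0          # Delta w_00 = eta * e_k * x_j
    print(f"{dw00:.3f}")          # -0.036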
Similarly, $\Delta w_{01} = -0.046$, $\Delta w_{10} = 0.08$, and $\Delta w_{11} = 0.103$. The error associated with the hidden layer is

$$ e_j = f'(s_j) \sum_{k=0}^{1} e_k w_{kj} $$
Then

$$ e_0 = \bigl(1 + \tanh 0.8\bigr)\bigl(1 - \tanh 0.8\bigr)\bigl[(-0.18)(1.0) + (0.399)(0.4)\bigr] = -0.011 $$
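This figure can be reproduced directly from the sum formula (a sketch; the hidden net input 0.8, the output-layer errors $-0.18$ and $0.399$, and the weights 1.0 and 0.4 are read from the bracketed sum above):

    import math

    # Hidden-layer error e_0 via e_j = f'(s_j) * sum_k e_k * w_kj.
    s0 = 0.8
    y0 = math.tanh(s0)                          # ~ 0.664
    back_sum = (-0.18) * 1.0 + 0.399 * 0.4      # sum_k e_k * w_k0
    e0 = (1 + y0) * (1 - y0) * back_sum         # f'(s_0) * sum
    print(f"e_0 = {e0:.3f}")                    # e_0 = -0.011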
Similarly, $e_1 = -0.011$. Changing the weights between layers $i$ and $j$,

$$ \Delta w_{ji} = \eta e_j x_i $$
Then

$$ \Delta w_{00} = (0.3)(-0.011)(1) = -0.0033 $$
Similarly, $\Delta w_{01} = -0.0033$, $\Delta w_{02} = 0$, $\Delta w_{10} = -0.0033$, $\Delta w_{11} = -0.0033$, and $\Delta w_{12} = 0$. These values indicate by how much the original set of weights should be changed. For example, the new set of coefficients becomes
$$ w_{00} = w_{00} + \Delta w_{00} = 0.5 - 0.0033 = 0.4967 $$

and $w_{01} = 0.2967$, $w_{02} = 0.1$, and so on.
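The last step in code (a sketch; the starting weights 0.5, 0.3, and 0.1 are inferred by adding the corrections back to the new coefficients quoted above):

    # Apply the input-to-hidden corrections to get the new coefficients.
    w  = [0.5, 0.3, 0.1]                 # inferred starting weights w_00..w_02
    dw = [-0.0033, -0.0033, 0.0]         # corrections computed above
    w_new = [round(wi + dwi, 4) for wi, dwi in zip(w, dw)]
    print(w_new)                         # [0.4967, 0.2967, 0.1]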