Digital Signal Processing Reference
In-Depth Information
[Figure: a minimal neural network learning XOR by backpropagation]

The network: an input layer with neurons U and V, a hidden layer with neurons X and Y, and an output layer with a single neuron Z. Every neuron outputs 1 if its weighted input sum exceeds the threshold of 0.01, otherwise 0.

XOR truth table (inputs U, V; output Z):
U V | Z
0 1 | 1
1 0 | 1
0 0 | 0
1 1 | 0

The weights w_ij are placed at random:
Wux = -4.9   Wvx = 5   Wuy = 4.6   Wvy = -5.1   Wxz = 2.2   Wyz = 2.5

Forward pass with input U = 1, V = 1:

Hidden layer:
Σx = (-4.9)*1 + 5*1 = 0.1 > 0.01 => X = 1
Σy = 4.6*1 + (-5.1)*1 = -0.5 < 0.01 => Y = 0

Output layer:
Σz = 2.2*1 + 2.5*0 = 2.2 > 0.01 => Z = 1

Error: here there should be a 0 (XOR) — the network does not simulate an XOR.

Backpropagation (backward error analysis):

Error Fz = 0 - 1 = -1

Weight correction at the output layer:
Wxz* = Wxz + Fz * Xout = 2.2 + (-1)*1 = 1.2
Wyz* = Wyz + Fz * Yout = 2.5 + (-1)*0 = 2.5

Errors passed back to the hidden layer:
Error Fx = Fz * Wxz = (-1)*2.2 = -2.2
Error Fy = Fz * Wyz = (-1)*2.5 = -2.5

Weight correction at the hidden layer:
Wux* = Wux + Fx * Uout = -4.9 + (-2.2)*1 = -7.1
Wvx* = Wvx + Fx * Vout = 5 + (-2.2)*1 = 2.8
Wuy* = Wuy + Fy * Uout = 4.6 + (-2.5)*1 = 2.1
Wvy* = Wvy + Fy * Vout = -5.1 + (-2.5)*1 = -7.6

This is now changing the net! Now test (or train) with all possible binary patterns, possibly applying backpropagation again, etc.
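The forward pass and the single correction step above can be sketched in a few lines of Python. This is a minimal sketch assuming the figure's threshold-neuron model and its simple correction rule w* = w + error * input; the variable names and dictionary layout are illustrative, not from the book:

```python
# Minimal sketch of the figure's calculation: threshold neurons and the
# simple correction rule w* = w + error * input (assumed from the figure).

THRESHOLD = 0.01

def fire(weighted_sum):
    """Step activation: 1 if the weighted sum exceeds the threshold, else 0."""
    return 1 if weighted_sum > THRESHOLD else 0

# Random initial weights from the figure
w = {"ux": -4.9, "vx": 5.0, "uy": 4.6, "vy": -5.1, "xz": 2.2, "yz": 2.5}

# Forward pass for U = 1, V = 1 (target Z = 0 for XOR)
U, V, target = 1, 1, 0
X = fire(w["ux"] * U + w["vx"] * V)   #  0.1 > 0.01 => X = 1
Y = fire(w["uy"] * U + w["vy"] * V)   # -0.5 < 0.01 => Y = 0
Z = fire(w["xz"] * X + w["yz"] * Y)   #  2.2 > 0.01 => Z = 1

# Backward error analysis (errors use the OLD output weights, as in the figure)
Fz = target - Z                        # -1
Fx = Fz * w["xz"]                      # -2.2
Fy = Fz * w["yz"]                      # -2.5

# Weight corrections: output layer uses hidden outputs, hidden layer uses inputs
w["xz"] += Fz * X                      #  2.2 - 1.0 =  1.2
w["yz"] += Fz * Y                      #  2.5 - 0.0 =  2.5
w["ux"] += Fx * U                      # -4.9 - 2.2 = -7.1
w["uy"] += Fy * U                      #  4.6 - 2.5 =  2.1
w["vx"] += Fx * V                      #  5.0 - 2.2 =  2.8
w["vy"] += Fy * V                      # -5.1 - 2.5 = -7.6

print(w)
```

Note that Fx and Fy must be computed from the old output weights before those weights are corrected, exactly as in the figure.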
Test of the corrected network (Wux* = -7.1, Wvx* = 2.8, Wuy* = 2.1, Wvy* = -7.6, Wxz* = 1.2, Wyz* = 2.5; threshold 0.01 at every neuron):

U = 1, V = 1:
Σx = (-7.1)*1 + 2.8*1 = -4.3 < 0.01 => X = 0
Σy = 2.1*1 + (-7.6)*1 = -5.5 < 0.01 => Y = 0
Σz = 1.2*0 + 2.5*0 = 0 < 0.01 => Z = 0

U = 0, V = 0:
Σx = (-7.1)*0 + 2.8*0 = 0 < 0.01 => X = 0
Σy = 2.1*0 + (-7.6)*0 = 0 < 0.01 => Y = 0
Σz = 1.2*0 + 2.5*0 = 0 < 0.01 => Z = 0

U = 1, V = 0:
Σx = (-7.1)*1 + 2.8*0 = -7.1 < 0.01 => X = 0
Σy = 2.1*1 + (-7.6)*0 = 2.1 > 0.01 => Y = 1
Σz = 1.2*0 + 2.5*1 = 2.5 > 0.01 => Z = 1

U = 0, V = 1:
Σx = (-7.1)*0 + 2.8*1 = 2.8 > 0.01 => X = 1
Σy = 2.1*0 + (-7.6)*1 = -7.6 < 0.01 => Y = 0
Σz = 1.2*1 + 2.5*0 = 1.2 > 0.01 => Z = 1
Already a successful test of the error calculation: the network simulates an XOR!
Illustration 283: How a neural network learns
… can probably be explained most simply by a “minimal” neural network whose neurons have the simplest possible characteristics. Here there is an input layer with two neurons, a hidden layer with two neurons and an output layer with only one neuron.
This neural network is to learn to act like a logical XOR module. All neurons have one rule in common: produce a 1 at the output if the threshold of 0.01 is exceeded, otherwise a 0. At first, the weights are picked randomly (see above), and a 1 is applied to both inputs.
The calculation, however, results in 1 instead of the 0 required by XOR: an error! The weights are now corrected by an error calculation directed backwards through the network (backpropagation). With the new weights (lower half of the picture) all 4 binary input combinations are tested. Result: this neural network works like an XOR module! Above all, this example shows that even rule-based (logical) systems can be realized using “learning” neural networks!
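The test of all four input combinations described above can be reproduced in a short sketch, assuming the same threshold-neuron model; `net` and the weight dictionary are illustrative names of this sketch, not the book's:

```python
# Sketch verifying the corrected network against the XOR truth table
# (threshold neurons as in the figure; weights after the correction step).

THRESHOLD = 0.01

def fire(weighted_sum):
    """Step activation: 1 above the threshold, else 0."""
    return 1 if weighted_sum > THRESHOLD else 0

# Corrected weights from the lower half of the illustration
w = {"ux": -7.1, "vx": 2.8, "uy": 2.1, "vy": -7.6, "xz": 1.2, "yz": 2.5}

def net(U, V):
    """Forward pass: inputs -> hidden layer (X, Y) -> output Z."""
    X = fire(w["ux"] * U + w["vx"] * V)
    Y = fire(w["uy"] * U + w["vy"] * V)
    return fire(w["xz"] * X + w["yz"] * Y)

for U in (0, 1):
    for V in (0, 1):
        print(U, V, "->", net(U, V))   # matches U XOR V for all four patterns
```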
These IPA modules were provided by this learning system and even expanded and altered.
The good news is: to use neural networks you do not necessarily have to know how they work! This is another difference from the usual signal processing systems dealt with in the earlier chapters: there, the “rule-based” overall system cannot be generated without detailed knowledge of every component or module; nothing can be “left to chance”.
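For completeness, the whole procedure of the illustration — random starting weights, forward pass, backward error analysis, re-test — can be combined into one small training loop. This is a sketch under the same assumptions as above; the loop structure, pattern ordering, and epoch limit are additions of this sketch, not the book's:

```python
# Hedged sketch: the figure's single correction step turned into a training
# loop over all four binary patterns, repeating backpropagation until the
# net reproduces the XOR truth table (or a safety limit is reached).

THRESHOLD = 0.01

def fire(weighted_sum):
    return 1 if weighted_sum > THRESHOLD else 0

def forward(w, U, V):
    """Forward pass: returns the hidden outputs (X, Y) and the net output Z."""
    X = fire(w["ux"] * U + w["vx"] * V)
    Y = fire(w["uy"] * U + w["vy"] * V)
    Z = fire(w["xz"] * X + w["yz"] * Y)
    return X, Y, Z

def train_xor(w, patterns, max_epochs=100):
    """Repeat the figure's correction rule until all patterns are correct."""
    for epoch in range(1, max_epochs + 1):
        errors = 0
        for U, V, target in patterns:
            X, Y, Z = forward(w, U, V)
            Fz = target - Z
            if Fz == 0:
                continue                      # pattern already correct
            errors += 1
            # Errors must use the OLD output weights, as in the figure
            Fx, Fy = Fz * w["xz"], Fz * w["yz"]
            w["xz"] += Fz * X                 # output-layer correction
            w["yz"] += Fz * Y
            w["ux"] += Fx * U                 # hidden-layer correction
            w["vx"] += Fx * V
            w["uy"] += Fy * U
            w["vy"] += Fy * V
        if errors == 0:
            return epoch                      # net matches the truth table
    return None                               # no convergence within the limit

patterns = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # XOR truth table
w = {"ux": -4.9, "vx": 5.0, "uy": 4.6, "vy": -5.1, "xz": 2.2, "yz": 2.5}
print("converged after epoch", train_xor(w, patterns))
```

Starting from the figure's random weights, only the pattern (1, 1) is wrong; one correction step fixes it and the second epoch passes all four patterns, reproducing the illustration exactly.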