bad.train(x.x, x.y);
err += nn.error();
errBad += bad.error();
}
System.out.println(err + "\t" + errBad);
}
After this iteration it is clear that the neural network with a hidden
layer has achieved a much lower overall error than the network without
the hidden layer, which has become trapped at a relatively high error,
as shown in Figure 11.5. The solid line shows the error of a three-layer
neural network learning an XOR pattern; the dotted line shows the
two-layer neural network failing to learn the same pattern.
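The comparison can be sketched as a self-contained program. This is not the book's library: the class name, the choice of four hidden units, the learning rate, and the epoch count are all illustrative assumptions. It trains a three-layer (input, hidden, output) sigmoid network and a two-layer network side by side on XOR and prints their final summed absolute errors, mirroring the loop shown above.

```java
import java.util.Random;

// Hypothetical sketch: three-layer vs. two-layer network on XOR.
public class XorComparison {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // Returns {err, errBad}: final summed absolute error of the
    // hidden-layer network and of the no-hidden-layer network.
    public static double[] run() {
        double[][] in = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] target = {0, 1, 1, 0};
        int H = 4;            // hidden units (an assumption)
        double lr = 0.5;      // learning rate (an assumption)
        Random rnd = new Random(1);

        // Three-layer network: hidden weights/biases plus output weights.
        double[][] wH = new double[H][2];
        double[] bH = new double[H], wO = new double[H];
        double bO = rnd.nextDouble() - 0.5;
        for (int h = 0; h < H; h++) {
            bH[h] = rnd.nextDouble() - 0.5;
            wO[h] = rnd.nextDouble() - 0.5;
            for (int j = 0; j < 2; j++) wH[h][j] = rnd.nextDouble() - 0.5;
        }
        // Two-layer network: a single sigmoid unit on the raw inputs.
        double[] wBad = {rnd.nextDouble() - 0.5, rnd.nextDouble() - 0.5};
        double bBad = rnd.nextDouble() - 0.5;

        double err = 0, errBad = 0;
        for (int epoch = 0; epoch < 10000; epoch++) {
            err = 0;
            errBad = 0;
            for (int s = 0; s < 4; s++) {
                double t = target[s];

                // Forward pass through the hidden layer.
                double[] hid = new double[H];
                double z = bO;
                for (int h = 0; h < H; h++) {
                    hid[h] = sigmoid(wH[h][0] * in[s][0]
                            + wH[h][1] * in[s][1] + bH[h]);
                    z += wO[h] * hid[h];
                }
                double out = sigmoid(z);

                // Standard backpropagation deltas for sigmoid units.
                double dOut = (out - t) * out * (1 - out);
                for (int h = 0; h < H; h++) {
                    double dH = dOut * wO[h] * hid[h] * (1 - hid[h]);
                    wO[h] -= lr * dOut * hid[h];
                    wH[h][0] -= lr * dH * in[s][0];
                    wH[h][1] -= lr * dH * in[s][1];
                    bH[h] -= lr * dH;
                }
                bO -= lr * dOut;
                err += Math.abs(t - out);

                // Delta-rule update for the two-layer network.
                double outBad = sigmoid(wBad[0] * in[s][0]
                        + wBad[1] * in[s][1] + bBad);
                double dBad = (outBad - t) * outBad * (1 - outBad);
                wBad[0] -= lr * dBad * in[s][0];
                wBad[1] -= lr * dBad * in[s][1];
                bBad -= lr * dBad;
                errBad += Math.abs(t - outBad);
            }
        }
        return new double[]{err, errBad};
    }

    public static void main(String[] args) {
        double[] r = run();
        System.out.println(r[0] + "\t" + r[1]);
    }
}
```

Because XOR is not linearly separable, the two-layer network has no weight setting that classifies all four patterns, so its error cannot fall below a floor no matter how long it trains; the hidden layer is what gives the three-layer network the capacity to escape that floor.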
Momentum Backpropagation
This version of the algorithm uses the simplest method for computing the
change in weight at each step. A common performance improvement is to
introduce the notion of “momentum” into the computation of the change in
weight.
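With momentum, each weight change blends the current gradient step with a fraction of the previous change. A minimal sketch of this update rule follows; the class and field names and the example rates (0.7 and 0.3) are illustrative assumptions, not the book's API.

```java
// Sketch of a momentum weight update:
//   delta_i = -learningRate * gradient_i + momentum * previousDelta_i
public class MomentumUpdate {
    final double learningRate;    // eta: size of the gradient step
    final double momentum;        // alpha: fraction of the last delta kept
    final double[] previousDelta; // last step's weight changes

    public MomentumUpdate(int weightCount, double learningRate, double momentum) {
        this.previousDelta = new double[weightCount];
        this.learningRate = learningRate;
        this.momentum = momentum;
    }

    // Apply one update in place, remembering each delta for the next step.
    public void update(double[] weights, double[] gradients) {
        for (int i = 0; i < weights.length; i++) {
            double delta = -learningRate * gradients[i]
                    + momentum * previousDelta[i];
            weights[i] += delta;
            previousDelta[i] = delta;
        }
    }

    public static void main(String[] args) {
        MomentumUpdate u = new MomentumUpdate(1, 0.7, 0.3);
        double[] w = {0.5};
        double[] g = {0.2};
        u.update(w, g);  // first step: a plain gradient step
        u.update(w, g);  // second step is larger: momentum carries over
        System.out.println(w[0]);
    }
}
```

Because consecutive deltas in the same direction reinforce each other, momentum speeds movement across flat regions of the error surface and damps oscillation in narrow valleys.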