Learning an Exclusive-Or Pattern
One of the classic examples used to motivate the need for hidden layers in neural networks is learning the exclusive-or (XOR) pattern. In this example, the training set has two inputs and a single output. If exactly one of the two inputs is active, the output is also active. If neither or both of the inputs are active, the output is inactive.
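Concretely, the four possible cases are:

Input 1   Input 2   Output
0         0         0
0         1         1
1         0         1
1         1         0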
As it happens, many learning algorithms, including a neural network without hidden layers, cannot learn this pattern. The reason is that many classification techniques can only separate classes with a linear decision boundary, and the XOR pattern is not linearly separable: no single straight line in the input plane can place (0,1) and (1,0) on one side and (0,0) and (1,1) on the other.
Adding a hidden layer allows the network, essentially, to further subdivide the input space into regions that are each linearly separable, so that the network as a whole can reproduce the exclusive-or pattern from its inputs, as the hand-built sketch below illustrates.
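To see why one hidden layer suffices, here is a small stand-alone sketch, independent of the book's framework, of a network whose weights are set by hand rather than learned: with step activations, one hidden unit computes OR, a second computes AND, and the output unit fires when OR is active but AND is not, which is exactly exclusive-or.

// Hand-built two-layer network computing XOR with step activations.
public class XorByHand {
    // Step activation: fires when the weighted sum is non-negative.
    static int step(double v) { return v >= 0 ? 1 : 0; }

    static int xor(int a, int b) {
        int hOr  = step(a + b - 0.5);   // hidden unit 1: at least one input active
        int hAnd = step(a + b - 1.5);   // hidden unit 2: both inputs active
        return step(hOr - hAnd - 0.5);  // output: OR but not AND
    }

    public static void main(String[] args) {
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                System.out.printf("%d XOR %d = %d%n", a, b, xor(a, b));
    }
}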
This example uses two neural networks—one with a hidden layer and
one without—that are both trying to learn an XOR pattern. The
networks, nn and bad respectively, are easy to define in the framework:
NeuralNetwork nn =
    NeuralNetwork.build().inputs(2).layer(3).layer(1);
NeuralNetwork bad =
    NeuralNetwork.build().inputs(2).layer(1);
nn.initialize();
bad.initialize();
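The training loop below iterates over a collection of observations; the Obs class and the xorData collection are not shown in this excerpt. A minimal, purely illustrative stand-in might look like this:

import java.util.Arrays;
import java.util.List;

public class XorData {
    // Hypothetical stand-in for the Obs class used below: one training
    // example with an input vector x and an expected output vector y.
    static class Obs {
        final double[] x, y;
        Obs(double[] x, double[] y) { this.x = x; this.y = y; }
    }

    // The four possible XOR cases used as training data.
    static final List<Obs> xorData = Arrays.asList(
        new Obs(new double[]{0, 0}, new double[]{0}),
        new Obs(new double[]{0, 1}, new double[]{1}),
        new Obs(new double[]{1, 0}, new double[]{1}),
        new Obs(new double[]{1, 1}, new double[]{0}));
}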
Both networks are then trained for 1,000 iterations on all possible XOR inputs and outputs, and the error is recorded:
for (int i = 0; i < 1000; i++) {
    double err = 0.0;
    double errBad = 0.0;
    for (Obs x : xorData) {
        // The excerpt cuts off here; assuming train() returns the error
        // for a single example, the natural completion accumulates it
        // for both networks.
        err += nn.train(x.x, x.y);
        errBad += bad.train(x.x, x.y);
    }
}
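If the framework behaves as the text describes, err should fall toward zero as nn learns the pattern, while errBad plateaus, since the single-layer network cannot represent the XOR decision boundary.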