    return layer(units, Activation.tanh());
}
To make a prediction, the feed method applies each layer in turn, feeding it the
output of the previous layer (or, for the first layer, the input vector itself):
public double[] feed(double[] x) {
    // Each layer's output becomes the input to the next layer.
    for (Layer l : layers)
        x = l.feed(x);
    return x;
}
Training with Backpropagation
Of course, a newly constructed feed-forward network is useless because
it has not yet been trained on any data. In fact, as currently implemented,
the output vector is always all zeros, because every connection weight in
the network is still zero.
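For example, a single layer built with the Layer constructor shown later in
this section returns nothing but zeros no matter what input it is fed, since
tanh(0) = 0. The layer size, input values, and variable names below are
illustrative only:

// Hypothetical example: 4 tanh units, 3 inputs, with a bias term.
// All weights default to zero, so the output is all zeros regardless of the input.
Layer hidden = new Layer(4, 3, Activation.tanh(), true);
double[] out = hidden.feed(new double[]{0.5, -1.0, 2.0}); // {0.0, 0.0, 0.0, 0.0}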
The most popular method for training a multilayer feed-forward neural
network is the backpropagation algorithm. The algorithm is a relative of
the generalized least squares algorithm used to fit the regression models
earlier in the chapter: it works by minimizing the sum-of-squares error
between the target output values and the output values observed after
feeding the input values through the network. The per-unit error is recorded
in an error array added to the Layer implementation. For efficiency, the Layer
implementation also maintains the total error for the layer:
double[] err;  // per-unit error terms for this layer
double E;      // total sum-of-squares error for this layer
public double[] errors() { return err; }
This array is initialized in the constructor to have the same size as the value
array:
public Layer(int units, int inputs, Activation fn, boolean bias) {
    this.fn = fn;
    v = new double[units];
    w = new double[units][];
    for (int i = 0; i < v.length; i++)
        w[i] = new double[bias ? inputs + 1 : inputs]; // one extra weight if a bias term is used
    err = new double[units]; // error terms, same size as the value array
}
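As a rough sketch of the sum-of-squares error described above, the output
layer's error terms and total error E could be filled in along the following
lines. The computeOutputError method name and the 1/2 scaling factor are
assumptions for illustration, not taken from the text:

// Sketch: compare the layer's observed values v against a target vector,
// storing the per-unit differences in err and accumulating the total error E.
public void computeOutputError(double[] target) {
    E = 0.0;
    for (int i = 0; i < v.length; i++) {
        err[i] = target[i] - v[i];   // difference between target and observed output
        E += 0.5 * err[i] * err[i];  // accumulate the sum-of-squares error
    }
}

The error terms for the hidden layers are then obtained by propagating these
values backward through the weights, which is what gives the algorithm its name.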