        w[i] = new double[inputs];  // one weight per input for unit i
    if (bias)
        bW = new double[units];     // one bias weight per unit
    err = new double[units];        // per-unit error terms, filled in during backprop
}
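For reference, here is a minimal sketch of the fields these fragments assume. The names are taken from the surrounding code; only the ActivationFunction type name is a guess, standing in for whatever interface supplies the activation function and its derivative:
class Layer {
    double[][] w;          // w[i][j]: weight from input j to unit i
    double[] bW;           // optional bias weight for each unit
    double[] v;            // value at which the activation derivative is
                           // evaluated (the unit's weighted input sum)
    double[] err;          // error term for each unit
    double E;              // summed squared error of the layer
    ActivationFunction fn; // supplies f(x) and its derivative df(x)
}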
This error array is updated during the training phase by propagating the
error from the output layer backward to the input layer. This backward
propagation gives the algorithm its name. It first computes the error for
each unit in the layer as the difference between the expected output and the
activation of each unit, multiplied by the derivative of the activation function.
For the output layer, the s array holds the difference between the training
value and the output of the feed method; the computation takes place in the
backprop method of the Layer class:
public double[] backprop(double[] s) {
    double[] out = new double[w[0].length];  // one slot per input unit
    E = 0;                                   // accumulated squared error
    for (int i = 0; i < v.length; i++) {
        err[i] = fn.df(v[i]) * s[i];         // error term: f'(v) times incoming difference
        E += err[i] * err[i];
To produce a difference array to propagate to the next Layer, the current
Layer computes a weighted sum of its own error values. As shown in the
following implementation, this essentially reverses the feed-forward
computation. The resulting array is returned so that it can be used as the
input to the backprop method of the next Layer:
        double[] W = w[i];            // weights into unit i
        for (int j = 0; j < W.length; j++)
            out[j] += W[j] * err[i];  // weighted error for input unit j
    }
    return out;
}
When the errors have been propagated to each of the layers, the weights
themselves are updated. Each weight is updated according to the delta rule,
which states that the change in the weight between two units is proportional
to the product of the current unit's error, the derivative of the unit's
activation function, and the activation value of the input unit.
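Because err[i] was computed as fn.df(v[i]) * s[i], the derivative factor is already folded into the stored error term, so the update reduces to multiplying it by the input activation and a learning rate. The following is a sketch of such an update method inside the Layer class, not the book's own code; the parameters x (the activations fed into this layer) and rate (the learning rate) are assumptions:
// delta rule: change in w[i][j] = rate * err[i] * x[j]
public void update(double[] x, double rate) {
    for (int i = 0; i < w.length; i++) {
        for (int j = 0; j < w[i].length; j++)
            w[i][j] += rate * err[i] * x[j];
        if (bW != null)
            bW[i] += rate * err[i];  // bias input is fixed at 1
    }
}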