The interactive build begins after the automatic completion of a single
epoch pass using the learning rate and momentum represented by the red
dot in the grid (a very high learning rate paired with a low momentum). The
initial "Training Error" is shown above the "Training Progress Plot". Normally
the initial error will be high, as it depends on the random weights assigned to
begin the process. It will improve as training progresses.
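That dependence on the random starting weights can be sketched in code. This is a generic illustration, not the tool's actual initialization routine; the function name, weight range, and layer sizes are assumptions:

```python
import random

def init_weights(n_inputs, n_hidden, n_outputs, seed=None):
    """Assign small random starting weights; the initial training error
    depends entirely on this random draw (range is an assumption)."""
    rng = random.Random(seed)
    # one weight per input plus a bias term for each hidden unit
    hidden = [[rng.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]
              for _ in range(n_hidden)]
    # likewise one weight per hidden unit plus a bias for each output unit
    output = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
              for _ in range(n_outputs)]
    return hidden, output
```

Two builds started with different seeds will therefore report different initial errors before any training has occurred.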
Each time you press the mouse button down while over the grid, training
resumes using the learning rate and momentum corresponding to the current
mouse location. It continues until the button is released. While training, you
may drag the red dot to other locations within the grid to change the learning
rate and momentum on the fly. The progress plot is updated to show the current
training error. When the mouse button is released, a checkpoint is created.
Checkpoints record the current state of the ANN and are depicted on the
progress plot by small red circles. They allow you to go back to previous
checkpoints to try training in a different direction. The interactive process
allows you to visually search for the right combination of learning rate and
momentum applicable to the dataset.
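The two knobs on the grid correspond to the standard gradient-descent-with-momentum weight update. A minimal sketch of one such step (the function and variable names are my own, not the tool's):

```python
def momentum_step(weights, grads, velocity, learning_rate, momentum):
    """One weight update: the velocity carries forward a fraction
    ('momentum') of the previous step, so a high momentum smooths the
    search while a high learning rate lengthens each step."""
    velocity = [momentum * v - learning_rate * g
                for v, g in zip(velocity, grads)]
    weights = [w + dv for w, dv in zip(weights, velocity)]
    return weights, velocity
```

With momentum set to zero this reduces to plain gradient descent, which is why dragging the red dot changes the character of the search, not just its speed.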
While holding the mouse down, slowly drag the red dot down from the
upper left corner toward the bottom center.
As you drag, you should see the training error progress plot begin to drop.
While training, it is a good idea to release then press the mouse button every few
seconds to set checkpoints that you may want to return to. Note: there will
always be an initial checkpoint available, set immediately after the first
training epoch completes.
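Conceptually, a checkpoint is just a saved copy of the network's weights together with its error at that moment. A hypothetical sketch of the mechanism (the class and method names are assumptions, not the tool's API):

```python
import copy

class CheckpointLog:
    """Stores snapshots of network state; each red circle on the
    progress plot corresponds to one entry."""
    def __init__(self):
        self._snapshots = []

    def save(self, weights, error):
        # deep-copy so that later training cannot mutate the snapshot
        self._snapshots.append((copy.deepcopy(weights), error))

    def restore(self, index):
        """Return to an earlier checkpoint to try a different direction."""
        weights, error = self._snapshots[index]
        return copy.deepcopy(weights), error
```

Restoring an earlier entry discards the training done since that point, which is exactly what "going back to a checkpoint" means in the interactive build.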
As you train, keep in mind the error rates achieved by the decision tree and
SVM classifiers (2.7% and 2.0% respectively). If you cannot push the ANN
error below these values, it probably is not a useful model.
Continue the training process until the classification error flattens (0.7%).
Click “Accept”.
After processing has completed, drag each new model up to an available
display, selecting “Confusion Matrix” as the viewer.
How do these two new classifiers (SVM and ANN) compare in terms of error
rate with the decision tree? The ANN misclassified only one flower (0.7%),
while the SVM and decision tree misclassified three (2.0%) and four (2.7%)
respectively. With just 150 observations, it is not possible to definitively say that
the ANN Iris classifier is better than SVM or decision tree classifiers. Later
in the chapter we will revisit issues of model performance.
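The percentages above follow directly from the misclassification counts over the 150 Iris observations; a quick arithmetic check:

```python
def error_rate(misclassified, total=150):
    """Classification error as a percentage, rounded to one decimal."""
    return round(100 * misclassified / total, 1)

# ANN misclassified 1 of 150; SVM, 3; decision tree, 4
rates = {name: error_rate(n)
         for name, n in [("ANN", 1), ("SVM", 3), ("decision tree", 4)]}
```

One flower either way shifts the rate by about 0.7 percentage points, which is why 150 observations are too few to declare a winner.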
Close all open confusion matrices.