[Figure 4.8 ANN Optimal Surface Map: panels (a) and (b), with Start and Best locations marked; panel (b) also shows the positions after Step 1 and Step 2]
At each training iteration, the current location takes a step in the direction of greatest improvement. Figure 4.8b represents the state after two iterations. At step 1, the ANN moved toward the nearest local optimum. At step 2, it continued in the same direction, which pushed it just past that local optimum. Where will it go at step 3? It will move back toward the nearest local optimum, reversing the direction it took at step 2. Without changes to the algorithm, it will continue to vacillate around the local optimum, never approaching the optimal (“best”) location.
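To see the vacillation concretely, consider a minimal sketch in Python on a deliberately simple, hypothetical one-dimensional error surface, f(x) = x^2 (the surface, the starting point, and the fixed step size below are illustrative assumptions, not values from the text):

    def grad(x):
        return 2 * x              # derivative of the assumed surface f(x) = x**2

    x = 1.5                       # the "Start" location (assumed)
    step = 1.0                    # fixed step size, chosen to expose the problem
    for i in range(6):
        x -= step * grad(x)       # step in the direction of greatest improvement
        print(f"step {i + 1}: x = {x:+.2f}")

The printed positions alternate between -1.50 and +1.50: each step jumps past the optimum at x = 0, and the next step jumps back, so the search hops back and forth without ever settling.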
To overcome this problem, researchers and designers of ANN algorithms added two adjustable parameters to the backward propagation methodology. The first was the learning rate. Think of the learning rate as the distance moved at each iteration. With a higher learning rate, it is possible that the ANN will step far enough beyond a local optimum to move, at the next iteration, toward a different optimum.
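As a rough illustration of that claim, the sketch below compares two learning rates on a hypothetical surface f(x) = x^4 - 2x^2 + 0.3x, which has a local minimum near x = +0.96 and a better minimum near x = -1.04 (the surface and all parameter values are assumed for illustration):

    def grad(x):
        return 4 * x**3 - 4 * x + 0.3   # derivative of the assumed surface

    def descend(learning_rate, x=1.5, iterations=100):
        for _ in range(iterations):
            x -= learning_rate * grad(x)    # the learning rate scales each step
        return x

    print(descend(0.01))   # ~ +0.96: small steps settle into the nearest local optimum
    print(descend(0.20))   # ~ -1.04: larger steps carry the search to a better optimum

With the small learning rate the search never leaves the basin it starts in; the larger rate steps far enough beyond the local optimum that subsequent iterations pull it toward the better one.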
The second parameter added was momentum. Momentum attempts to keep the ANN moving in the same direction as its movement at the previous iteration. Momentum ranges in value between 0 and 1. It represents the portion of the movement from the previous iteration that is added to the current iteration's computed movement to produce a total momentum-adjusted movement.
When momentum is zero, only the movement of the current iteration is used (Figure 4.9a); when momentum is one, the full movement of the previous iteration is included (Figure 4.9b). At 0.5, for example, half of the movement from the previous iteration is added to the current movement, thus nudging the training in the direction of previous movement while still allowing the current iteration to contribute its computed direction of movement (Figure 4.9c).
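A minimal sketch of this momentum-adjusted update, reusing the assumed surface from the previous example (parameter values again illustrative, not from the text):

    def grad(x):
        return 4 * x**3 - 4 * x + 0.3   # derivative of the assumed surface

    def descend_with_momentum(momentum, learning_rate=0.05, x=1.5, iterations=200):
        previous_move = 0.0
        for _ in range(iterations):
            current_move = -learning_rate * grad(x)               # this iteration's own movement
            total_move = current_move + momentum * previous_move  # add a portion of the last move
            x += total_move                                       # momentum-adjusted movement
            previous_move = total_move
        return x

    print(descend_with_momentum(0.0))   # momentum 0: only the current movement is used
    print(descend_with_momentum(0.5))   # momentum 0.5: half the previous move is carried forward

Both runs settle near the same optimum here (x ≈ +0.96); the difference momentum makes is in the path taken, since part of each previous movement is carried into the next step.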