6.3.1 Exploration of Generalization
Let's explore some of these ideas regarding the importance of Hebbian learning and inhibitory competition for generalization performance. We will work through one simple example that uses the oriented lines environment introduced previously in the model learning chapter.
Open project model_and_task.proj.gz in chapter_6.
Notice that the network now has an output layer — each of the ten output units corresponds to a horizontal or vertical line in one of the five different positions (figure 6.3).
The task to be learned by this network is quite simple — activate the appropriate output units for the combination of lines present in the input layer. This task provides a particularly clear demonstration of the generalization benefits of adding Hebbian learning to otherwise purely error-driven learning. However, because the task is so simple, it does not provide a very good demonstration of the weaknesses of pure Hebbian learning, which is actually capable of learning this task most of the time. The next section includes demonstrations of the limitations of Hebbian learning.
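To make the input/output mapping concrete, here is an illustrative Python sketch of the line patterns and their target outputs. It is not the project's own definition of the environment; the 5x5 grid and the position-0 = bottom/left convention are taken from the surrounding text, and all names are hypothetical.

    import numpy as np

    GRID = 5  # 5x5 input grid, as in the oriented lines environment

    def line_pattern(orientation, position):
        # One horizontal ('h') or vertical ('v') line at position 0..4
        # (0 = bottom row / leftmost column in the simulator's convention).
        grid = np.zeros((GRID, GRID))
        if orientation == 'h':
            grid[position, :] = 1.0
        else:
            grid[:, position] = 1.0
        return grid

    def target_for(lines):
        # Ten output units: 0-4 for horizontal lines, 5-9 for vertical ones.
        target = np.zeros(10)
        for orientation, position in lines:
            target[position if orientation == 'h' else 5 + position] = 1.0
        return target

    # An input event with two lines is the union of the two line patterns.
    event = np.maximum(line_pattern('h', 0), line_pattern('v', 3))
    print(target_for([('h', 0), ('v', 3)]))  # units 0 and 8 are active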
The model_task_ctrl control panel contains three learning parameters in the lrn field. The first two subfields of lrn control the learning rate of the network weights (lrate) and the bias weights (bias_lrate). The bias_lrate is 0 for pure Hebbian learning, which has no way of training the bias weights, and is equal to lrate for error-driven learning. The third subfield is the parameter hebb, which determines the relative weighting of Hebbian learning compared to error-driven learning (equation 6.1). This is the main parameter we will investigate to compare purely Hebbian (model) learning (hebb=1), purely error-driven (task) learning (hebb=0), and their combination (hebb between 0 and 1).
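To make the mixing concrete, here is a minimal sketch of the kind of combined update that equation 6.1 describes: a convex combination of a Hebbian term and an error-driven term. This is illustrative Python, not the simulator's code; it assumes the CPCA form of the Hebbian rule and a CHL-style error-driven term computed from minus-phase (expectation) and plus-phase (outcome) activations, as developed in earlier chapters.

    import numpy as np

    def delta_w(x_minus, y_minus, x_plus, y_plus, w, lrate=0.01, hebb=0.0):
        # x_*: sending activations, y_*: receiving activations, in the
        # minus (expectation) and plus (outcome) phases; w has shape
        # (n_receiving, n_sending).
        # CPCA-style Hebbian term: weights move toward the sending
        # activations, gated by the receiver's activity.
        dw_hebb = y_plus[:, None] * (x_plus[None, :] - w)
        # CHL-style error-driven term: plus-phase coproducts minus
        # minus-phase coproducts.
        dw_err = (y_plus[:, None] * x_plus[None, :]
                  - y_minus[:, None] * x_minus[None, :])
        # hebb=1: pure Hebbian (model) learning; hebb=0: pure
        # error-driven (task) learning; values in between mix the two.
        return lrate * (hebb * dw_hebb + (1.0 - hebb) * dw_err)

Note that with hebb=1 the error-driven term drops out entirely; since only the error signal can train the bias weights, this is why bias_lrate is set to 0 for pure Hebbian learning.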
Because we have to turn off the learning in the bias weights when doing pure Hebbian learning, we will use the learn_rule field to select the learning rule, which sets all of the parameters appropriately. Let's begin with pure error-driven (task) learning.

Set learn_rule to PURE_ERR, and Apply.

Let's see how the network is trained.
Figure 6.4: Graph log, showing the count of trials with training errors (Cnt SSE, solid line, red in the actual network), the number of lines distinctly represented (Unq Pats, dashed, yellow), and generalization error (Gen Cnt, dotted, green; note that this is on the same scale as the training SSE). The x axis spans 100 training epochs; the error counts are plotted on a 0-35 scale and Unq Pats on a 0-10 scale.
Press Step in the control panel.
This is the minus phase of processing for the first event, showing two lines presented in the input, and undoubtedly the wrong output units activated. Now let's see the plus phase.
Press Step again.
The output units should now reflect the two lines present in the input (position 0 is bottom/left).
You can continue to Step through more trials.
The network is only being trained on 35 out of the 45 total patterns, with the remaining 10 reserved for testing generalization. Because each of the individual lines is presented during training, the network should be able to recognize them in the novel combinations of the testing set. In other words, the network should be able to generalize to the testing items by processing the novel patterns in terms of novel combinations of existing hidden representations, which have appropriate associations to the output units.
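The arithmetic behind these numbers: with 10 distinct lines, there are 10 choose 2 = 45 unordered pairs. The project file defines the actual 35/10 split; the following hypothetical Python sketch just draws one at random, re-drawing until every line still appears somewhere in the training set, which is the property the generalization argument relies on.

    from itertools import combinations
    import random

    # The ten single lines: ('h', 0..4) and ('v', 0..4).
    lines = [('h', p) for p in range(5)] + [('v', p) for p in range(5)]

    # All unordered pairs of distinct lines: C(10, 2) = 45 events.
    events = list(combinations(lines, 2))
    assert len(events) == 45

    random.seed(0)  # arbitrary seed, for reproducibility
    while True:
        random.shuffle(events)
        test_set, train_set = events[:10], events[10:]
        # Keep the split only if every line occurs in some training event.
        if all(any(line in ev for ev in train_set) for line in lines):
            break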
After you tire of Stepping, open up a training graph log using View, TRAIN_GRAPH_LOG, and then press Run. You should turn off the Display button on the network, and watch the graph log.
As the network trains, the graph log is updated every epoch with the training error statistic, and every 5 epochs with two important test statistics (figure 6.4). Instead of using raw SSE for the training error statistic, we will often use a count of the number of events for which there is any error at all (again using the .5 threshold on the unit-level errors).
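Here is an illustrative Python sketch of this thresholded error count (the array-based names are hypothetical): an event counts as one error if any of its output units is off from its target by more than the .5 threshold, no matter how many units are wrong.

    import numpy as np

    def count_errors(outputs, targets, threshold=0.5):
        # outputs, targets: arrays of shape (n_events, n_units).
        # An event is scored as an error if at least one of its output
        # units differs from its target by more than the threshold.
        off = np.abs(outputs - targets) > threshold
        return int(off.any(axis=1).sum())

    # Raw SSE, by contrast, sums squared differences over all units,
    # so it weights an event by how wrong it is, not just whether it is wrong.
    def sse(outputs, targets):
        return float(((outputs - targets) ** 2).sum())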
Custom Search