Set env_type to COMBOS (Apply), and then do View, EVENTS.
You will see an EnviroView with one event and its
pattern displayed. This pattern contains the input that
will be presented to the hidden layer units. Let's first
run the default case where the 3 and 7 units are acti-
vated.
Press Run .
Figure 3.14 shows the results you should see.
Question 3.6 (a) Describe what happens to the input
layer activations when digit categories 7 and 3 are ac-
tivated (be sure to note even subtle differences in acti-
vation). (b) How do you account for this result? (c)
Can you change the value of g_bar_l to enhance any
differences between the levels of activation of the ac-
tive units? Explain why this helps. (d) How might this
kind of enhancement of differences be generally useful
in cognition?
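As a hint for part (c) of the question, the effect of the leak conductance can be sketched with the point-neuron equilibrium membrane potential. The sketch below uses illustrative values (the reversal potentials, threshold, and conductances are assumptions, not the simulator's actual parameters) to show how raising g_bar_l can push a weakly driven unit below threshold while a strongly driven one stays above it, enhancing the difference in their activations.

```python
# Illustrative sketch (assumed values, not the simulator's parameters) of how
# raising the leak conductance g_bar_l can enhance activation differences
# between more- and less-excited units once a firing threshold applies.

E_e, E_l, theta = 1.0, 0.15, 0.4   # reversal potentials and threshold (assumed)

def v_m(g_e, g_bar_l):
    """Equilibrium membrane potential with excitation and leak only:
    V_m = (g_e*E_e + g_bar_l*E_l) / (g_e + g_bar_l)."""
    return (g_e * E_e + g_bar_l * E_l) / (g_e + g_bar_l)

def act(g_e, g_bar_l):
    """Simple thresholded activation: zero below theta, linear above."""
    return max(0.0, v_m(g_e, g_bar_l) - theta)

weak, strong = 0.3, 0.5            # excitatory conductances of two units

for g_bar_l in (0.1, 1.0):
    print(f"g_bar_l={g_bar_l}: act(weak)={act(weak, g_bar_l):.3f}, "
          f"act(strong)={act(strong, g_bar_l):.3f}")
```

With the low leak value both units are active; with the high leak value the weakly driven unit is silenced entirely, so the threshold nonlinearity converts a modest membrane-potential difference into an all-or-nothing activation difference.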
Figure 3.14: Results of top-down activation of both the 3 and
7 digit categories.
At first it may be hard to tell any difference between the two runs, since they
both produce basically the same patterns of activity on
both layers. However, you can tell that the hidden units
are clamped during CATEGS because they are all at the
same activation value, whereas this value varies when
the input images are presented. The images also have a
slightly lower activity value when driven from the digit
category units. Furthermore, you can always click on
the ext button in the network window to see which
units are being driven by external inputs (i.e., clamped).
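The clamped-versus-free distinction described above can be sketched in a few lines. This is a hypothetical update rule (the function name and the saturating nonlinearity are assumptions for illustration): a clamped unit's activation is copied directly from its external input, ignoring its net input, which is why all clamped units driven with the same ext value sit at the same activation while free units vary.

```python
# Sketch of clamped vs. free units (hypothetical update rule, for illustration).
# Clamped units copy their external input; free units compute their activation
# from their net input through a simple saturating nonlinearity.

def unit_act(net_input, ext=None, clamp=False, gain=1.0):
    """Return a unit's activation: ext if clamped, else a squashed net input."""
    if clamp:
        return ext
    x = net_input * gain
    return x / (x + 1.0)  # saturating nonlinearity in [0, 1)

free = [unit_act(n) for n in (0.4, 0.7, 1.2)]               # activations vary
clamped = [unit_act(n, ext=0.95, clamp=True) for n in (0.4, 0.7, 1.2)]
print(free, clamped)  # free values differ; clamped values are all identical
```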
You can now observe the effects of activating your
own combinations of digit categories.
Click with the left mouse button on any of the digit
categories to toggle the input on or off. Be sure to press
the Apply button in the environment window, which will
be highlighted, so that your changes take effect. Have
fun trying different combinations!
Click on the ext button in the network window.
You should see that the hidden units, not the input
units, are being driven by external input.
This simple exercise demonstrates how bidirectional
connectivity enables information to flow, and transfor-
mations to be computed, in both bottom-up and top-
down directions. There are a number of other impor-
tant issues surrounding this phenomenon. For example,
what happens when there are different digit images that
correspond to the same categorical hidden unit? With
just a localist hidden representation, only the prototypi-
cal input (i.e., the one described exactly by the weights)
will be activated in a top-down fashion. However, in the
more realistic case of a distributed representation, lots
of different input images can be produced by activating
different combinations of hidden units.
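The localist-versus-distributed contrast can be made concrete with a toy weight matrix (the weights below are made up for illustration). Because the connections are bidirectional, top-down flow is just the transpose of the bottom-up weights: a one-hot (localist) hidden pattern reproduces exactly one stored prototype row, while a distributed combination of hidden units blends prototypes into input patterns beyond any single stored row.

```python
import numpy as np

# Toy bidirectional weights (made-up values): each row is the "prototype"
# input pattern encoded by one hidden unit.
W = np.array([[1.0, 1.0, 0.0, 0.0],    # hidden unit 0's prototype input
              [0.0, 0.0, 1.0, 1.0]])   # hidden unit 1's prototype input

# Localist case: a single active hidden unit recreates exactly its prototype.
one_hot = np.array([1.0, 0.0])
print(one_hot @ W)   # top-down input = prototype row of hidden unit 0

# Distributed case: a combination of hidden units blends the prototypes,
# producing an input pattern that matches no single stored row.
combo = np.array([1.0, 1.0])
print(combo @ W)
```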
We can get a sense of the effects of activating multi-
ple combinations of hidden units in the localist network.
Go to the PDP++Root window. To continue on to
the next simulation, close this project first by selecting
.projects/Remove/Project_0 . Or, if you wish to
stop now, quit by selecting Object/Quit .
3.4.2
Bidirectional Pattern Completion
Now we will explore pattern completion in a network
with bidirectional connections within a single layer of
units. Thus, instead of top-down and bottom-up pro-
cessing, this network exhibits lateral processing. The
difference is somewhat superficial, because the same
underlying processing mechanisms are at work in both
cases. However, lateral connectivity implies that the
units involved are more like peers, whereas top-down
and bottom-up connectivity implies that they have a
hierarchical relationship.
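Before running the simulation, the basic idea of pattern completion through lateral connectivity can be sketched with a Hopfield-style toy network (this is an assumption-laden stand-in, not the simulator's actual algorithm): symmetric weights within a single layer let a clamped subset of a stored pattern drive the remaining units to fill in the rest.

```python
import numpy as np

# Hopfield-style sketch of pattern completion via symmetric lateral weights
# (illustrative, not the simulator's actual settling algorithm).
pattern = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # a "known" stored pattern

# Hebbian lateral weights among units in one layer (no self-connections),
# built from the pattern in bipolar (+1/-1) form.
W = np.outer(2 * pattern - 1, 2 * pattern - 1)
np.fill_diagonal(W, 0.0)

# Clamp a subset of the pattern and let the free units settle.
act = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])        # partial cue
clamped = np.array([True, True, False, False, False, False])
for _ in range(5):
    net = W @ act                                      # lateral net input
    free = (net > 0).astype(float)                     # simple threshold units
    act = np.where(clamped, act, free)                 # keep clamped units fixed

print(act)   # the missing third unit of the pattern is filled in
```

The free units receive positive net input only where the stored pattern was active, so the layer settles into the complete pattern from the partial cue.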
We will see in this exploration that by activating a
subset of a “known” pattern in the network (i.e., one that