ing the rate at which the membrane potential is updated
on each cycle — are you still able to activate only the
left two hidden feature units? What does this tell you
about your previous results?
Next try to activate the ambiguous (center) input
feature, Speakers , by pressing the RunAmbig button.
One reasonable response of the network to this input
would be to weakly activate the other features associ-
ated with this ambiguous input in Hidden1 , indicating
that it cannot choose between these two possibilities.
This is impossible to achieve, however, because of the
spreading activation phenomenon.
Figure 3.18: Diagram of attractor dynamics, where the ac-
tivation state of the network converges over settling onto a
given attractor state from any of a range of initial starting
states (the attractor basin). The points in this space correspond
(metaphorically) to different activation states in the network.
Press RunAmbig first with a leak value of 1.737, and
then with a leak value of 1.736.
You can see that the network does not activate the
other feature units at all with a leak of 1.737, whereas
a value of 1.736 causes all of the units in the network
to become strongly activated. The network exhibits
strongly bimodal behavior, and with only a constant
leak current to control the excitation, does not allow for
graded levels of activation that would otherwise com-
municate useful information about things like ambigu-
ity.
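The bimodal effect of the leak current can be sketched with a single rate-coded unit that excites itself through a feedback weight. The parameters here (external input, feedback weight, gain, and the two leak values) are illustrative choices, not the simulator's actual values; the point is only that a tiny change in leak flips the outcome between no activation and near-saturation.

```python
# Minimal sketch of bimodal settling under a leak current, assuming a
# single rate unit with recurrent excitatory feedback onto itself.
# All parameter values are illustrative, not taken from the simulator.

def xx1(x, gain=5.0):
    """Saturating X-over-X-plus-1 nonlinearity (noise omitted)."""
    x = max(0.0, gain * x)
    return x / (x + 1.0)

def settle(leak, ext=1.0, w_fb=2.0, steps=50):
    """Iterate the unit to equilibrium; return its final activation."""
    y = 0.0
    for _ in range(steps):
        # Net drive: external input plus recurrent excitation, minus leak.
        y = xx1(ext + w_fb * y - leak)
    return y

print(settle(leak=0.999))  # strong activation (~0.9): feedback runs away
print(settle(leak=1.001))  # 0.0: leak wins before the feedback loop starts
```

Because the unit either crosses threshold (and the feedback then drives it toward saturation) or never crosses it at all, there is no parameter regime that yields a stable, graded intermediate activation — mirroring the network's behavior at leak values of 1.737 versus 1.736.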
It is useful to note that these bidirectional networks
tend to be strongly bimodal and nonlinear with respect
to small parameter changes (i.e., they either get acti-
vated or not, with little grey area in between). This is
an important property of such networks — one that will
have implications in later chapters. This bimodal non-
linear network behavior is supported (and encouraged)
by the nonlinearities present in the point neuron activa-
tion function (see section 2.5.4). In particular, the sat-
urating nonlinearity property of the sigmoidal noisy X-
over-X-plus-1 function provides a necessary upper limit
to the positive feedback loop. Also important is the effect of the gain parameter γ, which magnifies changes
around the threshold value and contributes to the all-or-
nothing character of these units.
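The gain parameter's role can be illustrated directly on the X-over-X-plus-1 function. In this sketch the threshold value and the specific gain settings are assumptions for illustration; what matters is that high gain turns the gradual rise above threshold into a near step, giving the all-or-nothing character described above.

```python
# Sketch of how gain sharpens the X-over-X-plus-1 function around
# threshold.  The threshold (0.25) and gain values are illustrative.

def xx1(net, theta=0.25, gain=1.0):
    """X-over-X-plus-1 with threshold and gain (noise omitted)."""
    x = max(0.0, gain * (net - theta))
    return x / (x + 1.0)

for net in (0.2, 0.25, 0.3, 0.5, 1.0):
    print(net, round(xx1(net, gain=1.0), 3), round(xx1(net, gain=100.0), 3))
# Below threshold both gains give 0; just above it, gain=100 jumps
# toward saturation while gain=1 rises only gradually.
```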
Next, set the leak current to 1.79 and press the
RunFull button.
This activates both the CRT and Speakers inputs.
You should see that the activation overflows to the third
feature unit.
Finally, increase the leak from 1.79 to 1.8 and
RunFull .
Two inputs get weakly activated, but the TV unit in
the Hidden2 layer does not. Thus, even with complete
and unambiguous input for the TV features, activation
either spreads unacceptably, or the network fails to get
appropriately activated.
You should have observed from these explorations
that bidirectional excitatory connectivity is a double-
edged sword; although it can do some interesting ampli-
fication and pattern completion processing, it can also
easily get carried away. In short, this type of connectiv-
ity acts like a microphone next to the speaker that it is
driving (or a video camera pointed at its own monitor
output) — you get too much positive feedback .
Go to the PDP++Root window. To continue on to
the next simulation, close this project first by selecting
.projects/Remove/Project_0 . Or, if you wish to
stop now, quit by selecting Object/Quit .
3.4.4 Attractor Dynamics
The notion of an attractor provides a unifying frame-
work for understanding the effects of bidirectional ex-
citatory connectivity. As we stated previously, an at-
tractor is a stable activation state that the network set-
tles into from a range of different starting states (think
of one of those “gravity wells” at the science museum
that sucks in your coins, as depicted in figure 3.18).
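The convergence of many starting states onto one stable state can be sketched with a tiny Hopfield-style network. This is an assumption for illustration — the chapter's networks use the point-neuron activation function, not binary Hopfield units — but the attractor-basin behavior is the same in spirit: each corrupted starting state falls back onto the single stored pattern.

```python
# Illustrative attractor-basin sketch using a Hopfield-style network
# (an assumption; the chapter's own networks use point-neuron units).
# One pattern is stored Hebbian-style, and corrupted starting states
# within its basin all settle back onto that same stored state.

import random

pattern = [1, -1, 1, -1, 1, -1]
n = len(pattern)
# Hebbian weights storing the pattern (zero self-connections).
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
     for i in range(n)]

def settle(state, sweeps=5):
    """Repeatedly update each unit from its net input until stable."""
    state = list(state)
    for _ in range(sweeps):
        for i in range(n):
            net = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if net >= 0 else -1
    return state

random.seed(0)
for _ in range(3):
    # Corrupt one randomly chosen unit, then settle.
    start = list(pattern)
    k = random.randrange(n)
    start[k] = -start[k]
    print(settle(start) == pattern)  # each start reaches the same attractor
```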