their activation (Braver, Cohen, & Servan-Schreiber, 1995; Dehaene & Changeux, 1989; Zipser, Kehoe, Littlewort, & Fuster, 1993). One can think of the effects of these recurrent connections in terms of an attractor, where the activation pattern of the network is attracted toward a stable state that persists over time, as discussed in chapter 3 (figure 9.16). An attractor is useful for memory because any perturbation away from that activation state is pulled back into the attractor, allowing in principle for relatively robust active maintenance in the face of noise and interference from ongoing processing.
Figure 9.16: Attractor states (small squares) and their basins of attraction (surrounding regions), where nearby activation states are attracted to the central attractor state. Each stable attractor state could be used to actively maintain information over time. Note that the two-dimensional activation space represented here is a considerable simplification of the high-dimensional activation state over all the units in the network.
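To make the attractor idea concrete, here is a minimal sketch (ours, in NumPy, not the simulator used in this book) of a Hopfield-style network: a single pattern is stored with a Hebbian outer product, the state is perturbed by flipping a few units, and the recurrent dynamics pull it back into the stored attractor. The network size and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one binary (+1/-1) pattern via a Hebbian outer product.
pattern = rng.choice([-1.0, 1.0], size=20)
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)  # no self-connections

# Perturb the stored state, as noise or interference might.
state = pattern.copy()
flip = rng.choice(len(state), size=4, replace=False)
state[flip] *= -1.0

# Recurrent dynamics: each update pulls the state back
# toward the stored attractor.
for _ in range(5):
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # True: perturbation corrected
```

With only one stored pattern, the basin of attraction spans nearly the whole state space; storing many overlapping patterns in the same weights narrows each basin, which is exactly the tension discussed next.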
The area around the attractor where perturbations are pulled back is called the basin of attraction. For robust active maintenance, one needs attractors with wide basins of attraction, so that noise and other sources of interference will not pull the network out of its attractor. When there are many closely related representations linked by distributed connections, the basin of attraction around each representation is relatively narrow (i.e., the network can easily slip from one representation into the next). Thus, densely interconnected distributed representations will tend to conflict with the ability to maintain a specific representation actively over time.

It is important to understand why active maintenance specifically demands wide attractor basins where on-line processing may not. During on-line processing (e.g., of sensory inputs), there is external input that can bias activation states in an appropriate way. In contrast, we assume that the actively maintained representation is solely responsible for the information it represents, so that it cannot rely on external biasing inputs. Putting this somewhat more generally, one can afford to activate related information when the original information is externally available, but when required to accurately maintain the original information itself, one does not want to activate related information, because of the risk of losing track of the original information.

For example, if one sees smoke, it is reasonable to activate the representation of fire as an inference based on a visible input, as long as this visible input is always present to constrain processing and clearly delineate perceptual truth from inference. However, if one has to actively maintain a representation of smoke in the absence of further sensory input, this kind of spreading activation of related representations can be problematic. In essence, one wants the active maintenance system to serve like an external input: it should veridically maintain information that is used to constrain processing elsewhere. Thus, one can either see or remember seeing smoke and use either the sensory input or the actively maintained smoke information to infer fire, but one should not then either hallucinate or falsely remember seeing fire (note that this problem gets much worse when the inferences are less certain than smoke → fire).

The distinction between inferential processing and active maintenance points to a tradeoff, in that one wants to have spreading activation across distributed connections for inference, but not for active maintenance. This is one reason why one might want to have a specialized system for active maintenance (i.e., the frontal cortex), while the generic (posterior) cortex is used for inference based on its accumulated semantic representations, as discussed further in section 9.5.

In what follows we explore the impact of connectivity and attractor dynamics on active maintenance by first examining a model where there is a set of features that can participate equally in different distributed representations. This model effectively has no attractors, and we will see that it cannot maintain information over time in the absence of external inputs: the activation instead spreads across the distributed representations, resulting in a loss of the original information. When we introduce distributed representations that sustain attractors, active maintenance succeeds, but not in the presence of significant amounts of noise; wider attractor
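As a rough preview of that first model, here is a sketch (again ours, with illustrative feature assignments rather than anything taken from the text) of how a shared feature lets activation spread between two distributed representations once external input is withdrawn:

```python
import numpy as np

# Two distributed representations that share feature 2:
# rep A = features {0, 1, 2}, rep B = features {2, 3, 4}.
A = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0, 1.0, 1.0])

# Associative weights: features that co-occur within a representation
# support each other; nothing here shapes distinct attractor basins.
W = np.outer(A, A) + np.outer(B, B)
np.fill_diagonal(W, 0.0)

# Activate rep A, remove the external input, let activation spread.
act = A.copy()
for step in range(6):
    act = np.tanh(W @ act)  # graded spreading, no cleanup
    print(step, np.round(act, 2))
# The shared feature drives rep B's features toward full activity,
# so the network no longer uniquely encodes A.
```

A cleanup nonlinearity that forced the state toward one stored pattern (as in the Hopfield-style sketch above) would prevent this drift, which is roughly the contrast the upcoming model explores.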