It is likely that the limited recursion or subroutining that does appear to exist in human cognition uses specialized memory systems to keep track of prior state information. For example, rapid hippocampal learning could be very useful in this regard, as could the frontal active memory system. Indeed, there is some recent evidence that the frontal pole may be specifically important for subroutine-like processing (Koechlin, Basso, & Grafman, 1999). In addition, the same kinds of combinations of conjunctive binding representations as discussed above could be useful, because they can, for example, produce different representations for the same item at different positions within a sequence.
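As a concrete illustration of that last point, here is a minimal Python/NumPy sketch (the random vector codes and their sizes are assumptions for illustration only, not a model of any particular brain system) showing how conjunctively binding an item code with a position code yields distinct patterns for the same item at different points in a sequence:

import numpy as np

rng = np.random.default_rng(0)

def conjunctive(item, position):
    # Conjunctive binding as the outer product of an item code and a
    # position code, flattened into a single pattern.
    return np.outer(item, position).ravel()

n = 20
item_a = rng.standard_normal(n)  # hypothetical distributed code for one item
pos_1 = rng.standard_normal(n)   # code for the first position in a sequence
pos_2 = rng.standard_normal(n)   # code for the second position

a_at_1 = conjunctive(item_a, pos_1)
a_at_2 = conjunctive(item_a, pos_2)

# The same item yields nearly uncorrelated patterns at the two
# positions, so downstream learning can treat them differently.
print(np.corrcoef(a_at_1, a_at_2)[0, 1])

Because the bound patterns are nearly uncorrelated, an association learned for the item in one position need not fire for the same item in another, which is just the kind of state-keeping that subroutine-like processing requires.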
7.6.6 Generalization, Generativity, and Abstraction

How do you get dedicated, content-specific representations to appropriately recognize novel inputs (generalization) and produce novel outputs (generativity)? This problem has often been raised as a major limitation of neural network models of cognition. One important point to keep in mind is that people are actually not particularly good at transferring knowledge learned in one context to a novel context (Singley & Anderson, 1989). Nevertheless, we are clearly capable of a significant amount of generalization and generativity.

One of the most important means of achieving generalization and generativity is by learning associations at the appropriate level of abstraction. For example, let's imagine that one learns about the consequences of the visual image corresponding to a tiger (e.g., “run away”). Because the actual visual image in this instance could have been anywhere on the retina, it is clear that if this learning took place on a lower-level, retinally based representation, it would not generalize very well to subsequent situations where the image could appear in a novel retinal location. However, if the learning took place on a more abstract, spatially invariant representation, then images of tigers in any location on the retina would trigger the appropriate response. This same argument applies at all levels of representation in the system: if you learn something such that the same (or a very similar) representation will be reactivated by the class of instances it is appropriate to generalize over, then generalization ceases to be a problem.
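To see why the level of representation matters, here is a toy sketch (Python/NumPy; the one-dimensional “retina,” the “tiger” pattern, and the histogram recoding are all illustrative assumptions, not anything from the text) contrasting a one-shot association learned on raw retinal activity with one learned on a crude spatially invariant recoding:

import numpy as np

def place(pattern, pos, size=50):
    # Render the same image at a given retinal position.
    retina = np.zeros(size)
    retina[pos:pos + len(pattern)] = pattern
    return retina

def invariant(retina):
    # Crude spatially invariant recoding: a histogram of activation
    # values, which discards where on the retina the pattern fell.
    return np.histogram(retina, bins=10, range=(0.0, 3.0))[0].astype(float)

tiger = np.array([1.0, 2.0, 3.0, 2.0, 1.0])  # made-up "tiger" feature profile

seen = place(tiger, 5)     # training image: tiger at one retinal position
novel = place(tiger, 37)   # test image: same tiger at a novel position

# One-shot Hebbian-style association: the learned weights are simply a
# copy of whichever representation was active during training.
w_retinal = seen
w_invariant = invariant(seen)

print("retinal response to novel image:  ", novel @ w_retinal)              # 0.0
print("invariant response to novel image:", invariant(novel) @ w_invariant)  # large

The retinal association responds with zero to the shifted image because none of the trained input units are active, whereas the invariant recoding is identical for both positions, so the association transfers completely.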
Given that we think the cortex is organized according to rough hierarchies of increasingly abstract representations, the abstraction solution to generalization is quite plausible. Further, it is likely that learning will automatically tend to form associations at the right level of abstraction: in the example above, the invariant representation will always be predictive of (correlated with) things associated with tigers, whereas the lower-level representations will be less so. Learning (of both the task and model variety) is very sensitive to predictability, and will automatically use the most predictable associations.

In addition to abstraction, distributed representations are important for generalization because they can capture the similarity structure of a novel item with previously learned items. Here, a novel item is represented in terms of a combination of “known” distributed features, such that the previously learned associations to these features provide a basis for correct responding to the novel item. Neurons will naturally perform a weighted average of the feature associations to produce a response that reflects an appropriate balance of influences from each of the individual features. We will see many examples of this in the chapters that follow.
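This weighted-average computation can be written out directly. In the sketch below (Python; the feature names and association strengths are made-up placeholders), a novel item activates several known features to varying degrees, and the response is the average of the features' learned associations, weighted by those activations:

feature_assoc = {
    "striped": -0.8,  # learned from tigers: avoid
    "furry":   +0.3,  # learned from pets: mild approach
    "large":   -0.4,  # learned from large animals: wariness
}

def respond(features):
    # Response to a novel item: the learned association of each active
    # feature, weighted by how strongly that feature is present in the
    # input, then normalized by the total activation (a weighted average).
    total = sum(features.values())
    blended = sum(strength * feature_assoc[name]
                  for name, strength in features.items())
    return blended / total

# A novel animal activates known features to varying degrees; the
# response blends the influences of each.
novel_animal = {"striped": 0.9, "furry": 0.5, "large": 0.7}
print(f"blended response: {respond(novel_animal):+.2f}")  # negative -> avoid

Here the strongly negative “striped” and “large” associations outweigh the mildly positive “furry” one, so the blended response comes out as avoidance.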
Although generativity has been somewhat less well explored, it also depends on the recombination of existing outputs. For example, novel sentences are generated by recombining familiar words. However, exactly what drives this novel recombination process (i.e., “creativity”) has yet to be identified.
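Purely as a toy illustration of recombination (a hand-written template and word lists, not a claim about mechanism), novel sentences can fall out of reassembling familiar parts:

import random

random.seed(1)

# Familiar words (a made-up lexicon) recombined under a fixed
# subject-verb-object template yield sentences never produced before.
subjects = ["the tiger", "the child", "the network"]
verbs = ["chases", "remembers", "surprises"]
objects = ["the ball", "the pattern", "the answer"]

for _ in range(3):
    print(random.choice(subjects), random.choice(verbs), random.choice(objects))

The sketch dodges the real question by letting a random draw select among the combinations; as noted above, what actually drives that selection remains unidentified.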
7.6.7 Summary of General Problems

It should be clear that most of the preceding problems stem from a somewhat impoverished set of assumptions about the nature of the representations involved. For example, the binding problem is much less of a problem if there is some element of conjunctivity in the representations, and generalization works well as long as knowledge is encoded at the proper level of abstraction. Thus, a general lesson from these problems is that it is important to question the representational assumptions that give rise to the problem in the first place.