neural network perspective have recently confirmed this possibility in the case of semantic priming (Joordens & Becker, 1997).
Also in the domain of memory, we can now understand in terms of computational principles why the brain should separate out the rapid learning of arbitrary information from the slow incremental learning of semantic and procedural information (i.e., to avoid a tradeoff). This perspective can considerably deepen our understanding of the nature of memory in the brain in ways that purely verbal labels simply cannot.
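The tradeoff in question can be demonstrated in a few lines of code. The following is a minimal sketch, not one of the models from the text: a one-layer linear network trained with the delta rule, with pattern sizes, learning rates, and epoch counts invented purely for illustration. Rapid, focused learning of new associations overwrites previously learned ones (interference), whereas slow, interleaved learning of old and new together preserves both.

```python
import random

def predict(W, x):
    # Linear one-layer network: output = W @ x.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def train_step(W, x, t, lr):
    # Delta rule: nudge weights to reduce output error on this pattern.
    y = predict(W, x)
    for i, row in enumerate(W):
        err = t[i] - y[i]
        for j in range(len(row)):
            row[j] += lr * err * x[j]

def total_error(W, pairs):
    return sum(sum((ti - yi) ** 2 for ti, yi in zip(t, predict(W, x)))
               for x, t in pairs)

random.seed(0)
n = 8

def rand_pat():
    return [random.choice([0.0, 1.0]) for _ in range(n)]

old_pairs = [(rand_pat(), rand_pat()) for _ in range(4)]
new_pairs = [(rand_pat(), rand_pat()) for _ in range(4)]

# Slowly learn the "old" associations with many interleaved passes.
W = [[0.0] * n for _ in range(n)]
for _ in range(300):
    for x, t in old_pairs:
        train_step(W, x, t, lr=0.05)

# (a) Rapid, focused learning of only the new items...
W_fast = [row[:] for row in W]
for _ in range(20):
    for x, t in new_pairs:
        train_step(W_fast, x, t, lr=0.2)

# (b) ...versus slow learning with old and new items interleaved.
W_slow = [row[:] for row in W]
for _ in range(300):
    for x, t in old_pairs + new_pairs:
        train_step(W_slow, x, t, lr=0.05)

print("old-item error after rapid new learning:",
      total_error(W_fast, old_pairs))
print("old-item error after interleaved learning:",
      total_error(W_slow, old_pairs))
```

Because the dense input patterns overlap, weight changes driven only by the new items disturb the old mappings; interleaving lets the network find a compromise that serves both sets, at the cost of learning the new items slowly. This is one way of seeing why a brain might want two learning systems with different speeds.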
In addition, we have seen in our explorations how the neural network approach can have significant implications for neuropsychological interpretation, for making inferences about normal function from people with brain damage (see also Farah, 1994). For example, Posner et al. (1984) used a simple box-and-arrow process model and the effects of parietal lobe damage in attentional cuing tasks to argue that the parietal cortex was responsible for "disengaging" attention. In contrast, we explored a model (based on that of Cohen et al., 1994) that showed how these effects (and other data on parietal lobe function) can be more plausibly explained within the basic principles of computational cognitive neuroscience, without any specific "disengage" mechanism (chapter 8).
In chapter 10, we saw that reading deficits following brain damage (dyslexia) can have complex and somewhat counterintuitive properties based on the premorbid division of labor over different processing pathways. This division of labor can be explained based on principles of learning in neural networks, and the resulting model provides a good fit to available data. Accounting for these data within a standard modular framework would require complex and improbable patterns of damage.
12.5.2 Models Deal with Complexity

Although it is convenient when we can explain nature using simple constructs, this is not always possible, especially in a system as complex as the brain. One major contribution of the computational approach is to provide a means of implementing and validating complex explanatory constructs.

For example, one of the basic principles emphasized in this text is multiple constraint satisfaction. Although this principle can be expressed relatively simply in mathematical and verbal terms, the way that this process actually plays out in an implemented model can be very complex, capturing the corresponding complexity of processing in the interactive brain. Without a firm mechanistic basis, the principle of multiple constraint satisfaction might come across as vague hand-waving.

The sentence gestalt model from chapter 10 provides an instantiation of the complex idea that rich, overlapping distributed representations can capture sentence meaning and syntactic structure. When relatively simple representational structures (e.g., hierarchical trees) have been used to try to achieve insight into the nature of sentence-level representations, they always seem to fall short of capturing the rich interdependencies between semantics and syntax, among other things. Furthermore, if one were simply to postulate verbally that some kind of magical distributed representation should have all the right properties, this would likely come off as mere optimistic hand-waving. Thus, one needs an actual implemented model that demonstrates the powerful complexity of distributed representations. These complex representations also require the use of sophisticated learning procedures, because they would be nearly impossible to hand-code.

Another example of the benefits of harnessing the complexity of distributed representations comes from the language models that can represent both regular and exception mappings using the same set of distributed representations. Previously, researchers were unable to conceive of a single system capable of performing both kinds of mappings. Similarly, the object recognition model from chapter 8 exhibits powerful and generalizable invariant recognition abilities by chaining together several layers of transformations; the invariance transformation is not a simple, one-step process, and this was difficult to imagine before the advent of network models such as that of Fukushima (1988).

The semantic representations model from chapter 10 (based on the work of Landauer & Dumais, 1997) showed that word co-occurrence statistics contain a surprising amount of semantic information. In this example, the complexity is in the environment.
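To make the principle of multiple constraint satisfaction concrete, here is a minimal sketch of a toy settling network, in the general spirit of interactive activation models but with units, weights, and biases invented purely for illustration. Units stand for hypotheses; symmetric positive weights encode mutual support, negative weights encode competition, and biases encode external evidence. Asynchronous updates with symmetric weights never increase a simple energy measure, so the network settles into a state that satisfies as many constraints as possible at once.

```python
# Hypothetical constraint network: two incompatible senses of "bank",
# plus two context units that each support one sense. All numbers are
# illustrative assumptions, not values from any published model.
names = ["bank=riverbank", "bank=institution", "context:water", "context:loan"]
W = [[0.0, -1.5, 1.0, 0.0],   # riverbank sense: competes with institution,
     [-1.5, 0.0, 0.0, 1.0],   # is supported by "water"; and vice versa.
     [1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0]]
b = [-0.2, -0.2, 1.0, -1.0]   # external evidence: "water" is present.

def net_input(s, i):
    return sum(W[i][j] * s[j] for j in range(len(s))) + b[i]

def energy(s):
    # Lower energy = more constraints satisfied.
    pair = sum(W[i][j] * s[i] * s[j]
               for i in range(len(s)) for j in range(len(s)))
    return -0.5 * pair - sum(b[i] * s[i] for i in range(len(s)))

def settle(s):
    # Update one unit at a time until no unit wants to change.
    changed = True
    while changed:
        changed = False
        for i in range(len(s)):
            new = 1 if net_input(s, i) > 0 else 0
            if new != s[i]:
                s[i] = new
                changed = True
    return s

state = settle([0, 0, 0, 0])
print([n for n, v in zip(names, state) if v])
# → ['bank=riverbank', 'context:water']
```

Even in this four-unit toy, the final interpretation is not computed by any single rule; it emerges from all the constraints pushing on each other during settling. In a large interactive network the same process plays out over thousands of units, which is why an implemented model, rather than a verbal statement of the principle, is needed to see what it actually does.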