ulations that allow readers to undertake their own ex-
plorations of the material presented in the text. An im-
portant and unique aspect of this book is that the ex-
plorations include a number of large-scale simulations
used in recent original research projects, giving students
and other researchers the opportunity to examine these
models up close and in detail.
In this chapter, we present an overview of the basic
motivations and history behind computational cogni-
tive neuroscience, followed by an overview of the sub-
sequent chapters covering basic neural computational
mechanisms (part I) and cognitive phenomena (part II).
Using the neural network models in this book, you will
be able to explore a wide range of interesting cognitive
phenomena, including:
Visual encoding: A neural network will view natural scenes (mountains, trees, etc.) and, using some basic principles of learning, will develop ways of encoding these visual scenes much like those your brain uses to make sense of the visual world.

Spatial attention: By taking advantage of the interactions between two different streams of visual processing, you can see how a model focuses its attention on different locations in space, for example to scan a visual scene. You can then use this model to simulate the attention performance of normal and brain-damaged people.

Episodic memory: By incorporating the structure of the brain area called the hippocampus, a neural network will become able to form new memories of everyday experiences and events, and will simulate human performance on memory tasks.

Working memory: You will see that specialized biological mechanisms can greatly improve a network's working memory (the kind of memory you need to multiply 42 by 17 in your head, for example). Further, you will see how the skilled control of working memory can be learned through experience.

Word reading: You can see how a network can learn to read and pronounce nearly 3,000 English words. Like human subjects, this network can pronounce novel nonwords that it has never seen before (e.g., “mave” or “nust”), demonstrating that it is not simply memorizing pronunciations; instead, it learns the complex web of regularities that govern English pronunciation. And, by damaging a model that captures the many different ways that words are represented in the brain, you can simulate various forms of dyslexia.

Semantic representation: You can explore a network that has “read” every paragraph in this textbook and in the process acquired a surprisingly good understanding of the words used therein, essentially by noting which words tend to be used together or in similar contexts.

Task-directed behavior: You can explore a model of the “executive” part of the brain, the prefrontal cortex, and see how it can keep us focused on performing the task at hand while protecting us from getting distracted by other things going on.

Deliberate, explicit cognition: A surprising number of things occur relatively automatically in your brain (e.g., you are not aware of exactly how you translate these black and white strokes on the page into some sense of what these words are saying), but you can also think and act in a deliberate, explicit fashion. You'll explore a model that exhibits both of these types of cognition within the context of a simple categorization task and, in so doing, provides the beginnings of an account of the biological basis of conscious awareness.
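The co-occurrence idea behind the semantic model can be sketched in a few lines of Python. This is only a toy illustration of the principle (counting which words share a context, then comparing their co-occurrence profiles), not the actual neural network described in the text; the corpus sentences and function names here are invented for the example.

```python
from collections import Counter, defaultdict
import math

# Hypothetical toy corpus standing in for the textbook's paragraphs.
corpus = [
    "neurons send signals across synapses",
    "synapses connect neurons into networks",
    "networks of neurons learn from experience",
    "experience shapes learning in the brain",
]

# For each word, count which other words appear in the same sentence.
contexts = defaultdict(Counter)
for sentence in corpus:
    words = set(sentence.split())
    for w in words:
        contexts[w].update(words - {w})

def similarity(w1, w2):
    """Cosine similarity between two words' co-occurrence vectors."""
    v1, v2 = contexts[w1], contexts[w2]
    dot = sum(v1[k] * v2[k] for k in v1)
    norm = math.sqrt(sum(c * c for c in v1.values())) * \
           math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0
```

Words that occur in similar contexts (here, "neurons" and "synapses") end up with overlapping co-occurrence vectors and hence higher similarity than words that rarely share a context, which is the essence of how the semantic network acquires word meanings from raw text.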
1.2 Basic Motivations for Computational Cognitive Neuroscience

1.2.1 Physical Reductionism
The whole idea behind cognitive neuroscience is the
once radical notion that the mysteries of human thought
can be explained in much the same way as everything
else in science — by reducing a complex phenomenon
(cognition) into simpler components (the underlying bi-
ological mechanisms of the brain). This process is just
reductionism, which has been and continues to be the
standard method of scientific advancement across most
fields. For example, all matter can be reduced to its
atomic components, which helps to explain the various