conscious and can have both “explicit” and “implicit”
representations.
However, there is reason to believe that the intuitive
notion captured by the term “declarative,” that con-
sciousness is strongly associated with language, also
has some validity. Specifically, language input/output
pathways become strongly associated with so many
other internal representations that they can exert con-
siderable influence over the general state of the sys-
tem, making them likely to be within conscious aware-
ness according to our working definition (e.g., when
someone says something to you, the words are likely
to strongly influence your conscious state).
Many of the problems discussed below stem from the lack of flexibility that results from the use of dedicated, content-specific representations. As we indicated earlier, there is a tradeoff along this flexibility-specialization dimension, and it appears that the brain has generally opted for the knowledge-dependency benefits of content-specific representations. Thus, the challenge posed by these problems is to understand how some measure of flexibility can emerge from within the context of a system with these knowledge-dependent representations.
One general category of approaches to the follow-
ing problems has been to try to implement structured,
symbolic-style representations in neural-like hardware
(e.g., Touretzky, 1986; Hummel & Biederman, 1992;
Hummel & Holyoak, 1997; Smolensky, 1990; Shastri &
Ajjanagadde, 1993). Most of these models adopt a dy-
namic temporal binding mechanism and therefore use
mechanisms that go beyond the standard integration of
weighted activation signals that we use in the models in this text. The appeal of such models is that their representations transparently exhibit the kinds of flexibility
and structure that are characteristic of symbolic models
(e.g., binding is explicitly achieved by a binding mecha-
nism, and hierarchical representations are literally hier-
archical). The limitations of such models are also sim-
ilar to the limitations of symbolic models — learning
mechanisms for establishing the necessary structured
representations and the systems that process them are
limited at best, and it is unclear how the needed mecha-
nisms relate to known biological properties.
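To make the idea of dynamic temporal binding more concrete, here is a deliberately minimal sketch (our own toy illustration, not an implementation of any of the cited models): each active feature unit carries a firing phase, and features that oscillate in synchrony are read out as bound together. The feature names and the phase-grouping scheme are hypothetical simplifications.

```python
# Toy sketch of binding by temporal synchrony: features firing at the
# same phase are treated as bound into one object. This is an
# illustrative simplification, not any published model's mechanism.
from collections import defaultdict

def read_bindings(feature_phases):
    """Group features whose firing phases coincide (to one decimal)."""
    groups = defaultdict(list)
    for feature, phase in feature_phases.items():
        groups[round(phase, 1)].append(feature)
    return [sorted(group) for group in groups.values()]

# A scene containing a red circle and a blue square: color and shape
# features are bound by sharing a firing phase, not by a dedicated unit.
scene = {"red": 0.0, "circle": 0.0, "blue": 0.5, "square": 0.5}
bindings = read_bindings(scene)
```

Reading out two separate phase groups recovers the correct color-shape pairings without a conjunctive "red circle" unit, which is the flexibility these models aim for.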
We do not think that the advantages of the structured
models outweigh their disadvantages — there are rea-
sonable solutions to the following problems that are
more consistent with the basic set of principles devel-
oped in this text. Whereas the structured model solu-
tions to these problems provide formal and transparent
solutions, the solution we generally advocate relies on
the emergent powers of complex distributed representa-
tions across many different brain areas. As we men-
tioned previously, the following problems often arise
because of the pervasive assumption that a single canon-
ical representation must satisfy all possible demands —
if one instead considers that a distributed collection of
different kinds of representations can work together to
satisfy different demands, the problem disappears.
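As a toy illustration of this point (our own sketch, not a model from the text), a single distributed pattern can satisfy different demands through different readouts, so that no one canonical representation has to carry every distinction by itself. The feature layout and readout weights here are hypothetical.

```python
# Toy sketch: one distributed pattern, multiple specialized readouts.
# Distributed pattern over 4 units: [red, blue, circle, square]
pattern = [1.0, 0.0, 1.0, 0.0]   # encodes "red circle"

def readout(weights, pattern):
    """A linear readout: each demand taps the pattern with its own weights."""
    return sum(w, ) if False else sum(w * p for w, p in zip(weights, pattern))

color_readout = [1.0, -1.0, 0.0, 0.0]   # sensitive only to the color units
shape_readout = [0.0, 0.0, 1.0, -1.0]   # sensitive only to the shape units

is_red = readout(color_readout, pattern) > 0
is_circle = readout(shape_readout, pattern) > 0
```

Different demands (identify the color, identify the shape) are served by different readouts over the same shared pattern, rather than by one representation that must make every distinction explicit.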
7.6 General Problems
We next address a number of general problems that arise from the functional principles described above.
clusion that because neural networks exhibit some kind
of problem, they are somehow bad models of cogni-
tion. A classic example of this can be found in the
case of catastrophic interference, where McCloskey
and Cohen (1989) found that generic neural networks
suffered much more interference from simple sequential
list learning than humans did. This led them to conclude
that neural networks were not good models of cogni-
tion. However, McClelland et al. (1995) showed that
this failure actually tells us something very important
about the way the brain works and helps to make sense
of why there are two fundamentally different kinds of
memory systems (the cortex and the hippocampus, as
described previously and in chapter 9).
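The flavor of the effect can be seen in a deliberately minimal sketch (our own toy example, not McCloskey and Cohen's actual simulation): a single linear unit is trained with the delta rule on one input-output association, then on a second association that shares an input line, and its performance on the first collapses.

```python
# Toy demonstration of catastrophic interference in a generic network:
# sequential training on task B overwrites the shared weight that task A
# depended on. This is an illustrative sketch, not the original study.

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train(w, x, target, lr=0.1, epochs=200):
    """Delta-rule training on a single input-target association."""
    for _ in range(epochs):
        err = target - predict(w, x)
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

w = [0.0, 0.0]
w = train(w, [1.0, 1.0], 1.0)                  # learn A: [1,1] -> 1
err_A_before = abs(1.0 - predict(w, [1.0, 1.0]))
w = train(w, [1.0, 0.0], 0.0)                  # then learn B: [1,0] -> 0
err_A_after = abs(1.0 - predict(w, [1.0, 1.0]))
# err_A_before is essentially 0; err_A_after is about 0.5, because
# learning B drove the shared first weight to 0.
```

Because both associations load on the same weight, there is no way to fit B without partially unlearning A; interleaved training or a second, complementary memory system (as in the hippocampal proposal above) is needed to avoid this.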
It is also important to emphasize that in many cases
these problems actually reflect documented limitations
of human cognition. Thus, instead of taking some
kind of “optimality” or “rational analysis” approach that
would argue that human cognition is perfect, we suggest that cognition reflects a number of tradeoffs and compromises. The fact that neural network models
seem to provide useful insight into the nature of these
human cognitive limitations is a real strength of the ap-
proach.
Many of the following problems have to do with the lack of flexibility that results from the use of dedicated, content-specific representations.