to the perceived cognitive “jolt” caused by the garden path effect, rather than the
need to re-parse assumed in more traditional accounts. In this way, the phenomenon
somewhat imprecisely called “recursion” by Chomskian linguists [14] is (unsurprisingly)
enfolded in the parallel model without the addition of a special mechanism, and in
particular without the stack entailed by computational notions of recursion. Because
the mechanism proposed here is not specific to humans (indeed, it is motivated by
reference to more cognitively primitive species [55]), there is no reason to claim,
as do Fitch et al. [14], that cognitive recursion is a defining property of humanity.
What appears as recursion in the particular style of Chomskian analysis is accounted
for by a mathematically simpler mechanism in IDyOT.
The parsing-by-prediction process is illustrated in Fig. 7.3. The figure shows,
centrally, a line of text being processed by IDyOT. Above and below the line are
the predictions of two generators, G1 and G2, respectively; recall, however, that in
a real run of IDyOT there would be many more generators than this. The predictions
are labelled arbitrarily in the figure, but it is important to note that these
categories are synthesised bottom-up and not imposed top-down. The function T_{G_i}
(one per generator) is the information-theoretic measure used to compare and select
the input into the Global Workspace. The effect modelled here is akin to that
proposed by Hale [19], differing in that it is an epiphenomenon of the throttling
of information entering the Global Workspace (specifically: more information than
is usual), rather than relying on top-down detection of a change in distribution.
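The selection step described above can be sketched in code. The following is a minimal illustration only: it assumes, hypothetically, that the measure T_{G_i} is Shannon surprisal and that admission to the Global Workspace is governed by a fixed threshold in bits. None of the names, probabilities, or the threshold value come from the IDyOT specification itself.

```python
import math

def surprisal(probability):
    """Shannon information content of an event, in bits."""
    return -math.log2(probability)

def select_for_workspace(candidates, threshold_bits):
    """Admit only predictions carrying more information than usual,
    and pass the most informative one onward.
    `candidates` is a list of (label, estimated_probability) pairs,
    one per generator; all names here are illustrative assumptions."""
    admitted = [(label, surprisal(p)) for label, p in candidates
                if surprisal(p) > threshold_bits]
    return max(admitted, key=lambda pair: pair[1], default=None)

# Two generators' current predictions (labels are arbitrary, as in Fig. 7.3):
g1 = ("expected-continuation", 0.60)   # unsurprising: about 0.74 bits
g2 = ("garden-path-revision", 0.05)    # surprising: about 4.32 bits
winner = select_for_workspace([g1, g2], threshold_bits=2.0)
```

On this toy reading, it is the unexpected prediction, not the expected one, that floods the workspace with information, which is the sense in which the garden path “jolt” falls out of the throttling mechanism rather than from a separate re-parsing step.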
Our testable prediction (about the model, as opposed to the predictions of the
model), therefore, is that IDyOT should be able to learn to segment to the same
degree as the top-down models in both music and language, and to simulate the
garden path effect, when an initial semantic interpretation is displaced by a new one.
It should be able to do these things without additional descriptive rules.
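To indicate what bottom-up, rule-free segmentation by information content looks like, here is a small sketch: a bigram model supplies surprisal estimates, and a boundary is placed wherever surprisal peaks above a threshold. The bigram model, the threshold, and the toy corpus are all assumptions made for illustration; IDyOT's actual statistical machinery is considerably richer than this.

```python
from collections import defaultdict
import math

def train_bigram(corpus):
    """Count symbol-to-symbol transitions in a list of strings."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in corpus:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def surprisal(counts, a, b):
    """Bits of information in seeing b after a (high floor for unseen pairs)."""
    total = sum(counts[a].values())
    if total == 0 or counts[a][b] == 0:
        return 20.0  # arbitrary high surprisal for unseen transitions
    return -math.log2(counts[a][b] / total)

def segment(counts, seq, threshold_bits):
    """Insert a boundary before every unexpectedly informative symbol."""
    boundaries = [0]
    for i in range(1, len(seq)):
        if surprisal(counts, seq[i - 1], seq[i]) > threshold_bits:
            boundaries.append(i)
    return [seq[s:e] for s, e in zip(boundaries, boundaries[1:] + [len(seq)])]

# Toy corpus: "the" is always followed by one of two words, so the
# transition out of "e" is the only uncertain (informative) point.
model = train_bigram(["thedog", "thecat", "thedog", "thecat"])
parts = segment(model, "thedog", threshold_bits=0.5)
```

The point of the sketch is that word-like units emerge from the statistics of the input alone, with no top-down lexicon or boundary rules, which is the sense in which the prediction above is testable.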
7.5.2 Lexical Ambiguity and Human-Like Misunderstanding
Given that it can segment, the IDyOT model should also be able to parse. However,
the notion of parsing here is different from the Chomskian one. Chomskian linguists
sometimes talk about a parser and/or a grammar “getting” a grammatical construct,
which we take to mean that it can be correctly parsed, presumably with something
like a shift-reduce parser. The equivalent construct in IDyOT has no notion of
correctness, but instead a degree of match with observed likelihood. Thus, IDyOT is
not dependent on whole sentences, or on any predefined syntactic unit: its simple
inference method runs bottom-up to combine and predict from whatever input it is
given, using whatever network structure it has learned. When the result contains
sufficient information in comparison with other candidates, it is passed into the
Global Workspace, and this, as it moves up the hierarchy of groups, corresponds with
the moment of Chomskian “getting”. This process, from the morpheme level up, is
illustrated in Fig. 7.2. Figure 7.2b in particular illustrates IDyOT dealing with
syntactic ambiguity