in a given modality to match against. We hypothesise that more generators would
deploy in a given area as the amount of information in the relevant signal increases;
this might account for the cognitive “jolt” of attention reallocation experienced when
one is consciously focused on one stimulus and another forcibly intervenes: the jolt is
the subjective effect of a sudden, ill-prepared reassignment of generators.
Entry into the Global Workspace broadly corresponds with the assignment of
attention (construed as processing power) to the chunk of perceived input thus pro-
duced. As in language experiments on parsing by competitive chunking [40, 44],
this breaks linguistic sequences into statistically coherent groups, which tend to
correspond with semantically coherent sub-phrases, though the chunks do not nec-
essarily correspond with traditional linguistic categories. Once a chunk has entered
the Global Workspace, it is also added to the memory, and so becomes available to
the generators for prediction. This creates a positive feedback loop: the chunks
inform the statistical model, which in turn drives further chunking, reinforcing
the loop and leading to our first prediction, in Sect. 7.5.1, below.
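To make this loop concrete, here is a minimal Python sketch, assuming a simple bigram model and an entropy threshold for boundary detection; the class and parameter names are our own illustrations, not part of any published IDyOT implementation. It segments a symbol stream where predictive uncertainty spikes, and feeds the resulting chunks back into the statistics:

```python
from collections import Counter, defaultdict
import math

class BoundaryEntropyChunker:
    """Sketch of statistical chunking with memory feedback (names assumed)."""

    def __init__(self, threshold=1.5):
        self.threshold = threshold           # uncertainty level read as a boundary
        self.bigrams = defaultdict(Counter)  # the statistical model
        self.memory = []                     # chunks available for prediction

    def next_symbol_entropy(self, context):
        """Shannon entropy of the predicted next-symbol distribution."""
        counts = self.bigrams[context]
        total = sum(counts.values())
        if total == 0:
            return float("inf")              # unseen context: fully uncertain
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def process(self, symbols):
        if not symbols:
            return
        # Update the statistical model with the new observations.
        for a, b in zip(symbols, symbols[1:]):
            self.bigrams[a][b] += 1
        # Segment: a spike in predictive uncertainty marks a chunk boundary.
        chunk = [symbols[0]]
        for sym in symbols[1:]:
            if self.next_symbol_entropy(chunk[-1]) > self.threshold:
                self.memory.append(tuple(chunk))
                chunk = []
            chunk.append(sym)
        self.memory.append(tuple(chunk))
        # Feedback: remembered chunks re-enter the model as symbols in their
        # own right, so they inform (and reinforce) subsequent chunking.
        for earlier, later in zip(self.memory, self.memory[1:]):
            self.bigrams[earlier][later] += 1
```

The essential point is the final step: once chunks are symbols in the model, the statistics that drive segmentation are themselves shaped by earlier segmentations.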
7.4.2 Representation, Memory and Prediction
Thus far, we have discussed chunks and sequences, but we have not specified the
detail: chunks and sequences of what? The reason is that the representation
formalism proposed in IDyOT must be understood as intimately related to the
chunking process described above.
The key to IDyOT's representation is that all percepts are represented in
multiple, statistically related ways. Since the architecture is centrally focused on
sequence, memory is expressed as sequences. Each sequence is statistically linked to
a lower-level set of sub-sequences composing it, and to a higher-level set of super-
sequences categorising its chunks. The easiest way to understand this is to think in
terms of language. Given a lexicon of English words, we begin with a simple sentence
such as
The horse raced past the barn.
At a naïve level of representation, this sentence would appear as a sequence of 6
symbols, and at each point in the sequence (i.e., at each word boundary) there is a
distribution, computed from the context so far and a background model, predicting
the next word, just as in most statistical parsing approaches. In IDyOT's theory, there
is one difference at this level: as each input word appears, rather than simply taking
it as given, we match it against the symbols IDyOT predicted, using a continuous
similarity metric that interacts with the distribution, as described in Sect. 7.4.3. In this
way, expectation governs (and selects) what is heard, and when there is not a clear
winner, misunderstanding between similar symbols can arise, exactly as in humans.
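As a toy illustration of this selection step, one might weight each predicted candidate's probability by its similarity to the observed input; the weighting scheme, function names, and ambiguity test below are our own assumptions, and the actual metric is the subject of Sect. 7.4.3:

```python
import math

def select_percept(observed, predicted_dist, similarity):
    """Pick the 'heard' symbol by combining prediction with similarity.

    predicted_dist maps candidate symbols to probabilities; similarity(a, b)
    returns a value in [0, 1]. Both are placeholders for IDyOT's own models.
    """
    scores = {c: p * similarity(observed, c) for c, p in predicted_dist.items()}
    ranked = sorted(scores.values(), reverse=True)
    # Near-tied scores are where "mishearing" between similar symbols arises.
    ambiguous = len(ranked) > 1 and math.isclose(ranked[0], ranked[1], rel_tol=0.1)
    return max(scores, key=scores.get), ambiguous

# Example: expectation favours "barn" over a similar-looking alternative.
dist = {"barn": 0.6, "bard": 0.3, "yard": 0.1}
sim = lambda a, b: sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))
print(select_percept("barn", dist, sim))  # ('barn', False)
```

When two candidates score nearly equally, selection between them is effectively arbitrary, which is one way to model the human-like misunderstanding noted above.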
However, IDyOT's memory does not consist only of sequences of unstructured
symbols. The design of the system is intended to capture the full stack of capa-
bilities from audio processing up to the Global Workspace, though we expect to