Fig. 1. Experimental design (schematic). Each trial comprises three phases: stim-
ulus presentation, motor response, and reinforcement. Firstly, a fractal object appears,
surrounded by four response options (grey discs). Secondly, the observer reacts by
pressing the key that corresponds to one response option (outlined disc). Thirdly, a
colour change of the chosen option provides reinforcement (green if correct, red if
incorrect). [5]
Human observers viewed highly distinguishable fractal objects and learned to
select one of four possible motor responses for each object. Some objects were
consistently preceded by specific other objects, while other objects lacked such
temporal context.
Observers were instructed to learn to respond 'correctly' to each fractal object.
It was explained that, for each fractal object, one of the four possible responses
was 'correct', while the other three responses were 'incorrect'. Observers were
told that they had to become familiar with and learn to recognise each fractal
object and that they had to learn the 'correct' response for each object by trial
and error (Fig. 1). They were further told that there was no pattern or system
that would enable them to predict which response a particular fractal object
required. No mention of or reference to the sequence of trials and fractal objects
was made.
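The trial-and-error procedure described above can be sketched in code. This is a hypothetical simulation, not the model from [5]: the learner tries responses at random, remembers a response once green feedback confirms it, and rules out responses after red feedback. All names and parameters (e.g. `n_trials`) are illustrative assumptions.

```python
import random

N_OBJECTS, N_RESPONSES = 8, 4  # eight fractal objects, four response keys

def simulate_learning(n_trials=200, seed=0):
    """Simulate trial-and-error learning of object-response associations."""
    rng = random.Random(seed)
    # One 'correct' response per object, unknown to the learner.
    correct = [rng.randrange(N_RESPONSES) for _ in range(N_OBJECTS)]
    ruled_out = [set() for _ in range(N_OBJECTS)]  # responses proven wrong
    known = [None] * N_OBJECTS                     # responses proven correct
    n_correct = 0
    for _ in range(n_trials):
        obj = rng.randrange(N_OBJECTS)  # stimulus presentation
        if known[obj] is not None:
            resp = known[obj]           # already learned: respond correctly
        else:
            resp = rng.choice([r for r in range(N_RESPONSES)
                               if r not in ruled_out[obj]])
        if resp == correct[obj]:
            known[obj] = resp           # 'green' feedback: remember
            n_correct += 1
        else:
            ruled_out[obj].add(resp)    # 'red' feedback: rule out
    return n_correct / n_trials

print(simulate_learning())
```

With four options per object, such an eliminating learner makes at most three errors per object, so accuracy rises quickly over a session.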
Sequences contained eight objects and were either maximally deterministic or
maximally random. In the deterministic sequence, the eight objects always ap-
peared in the same order, so that preceding objects were just as predictive about
the correct response in the current trial as the current object. In the random
sequence, each object followed every other object with equal probability. Thus,
preceding objects provided almost no information about the correct response in
the current trial.
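The two sequence types can be made concrete with a short sketch, assuming only what is stated above (eight objects; a fixed repeating order in the deterministic case; in the random case, each object followed by any of the other objects with equal probability). Function names are illustrative.

```python
import random

N_OBJECTS = 8

def deterministic_sequence(n_trials):
    # The eight objects always appear in the same fixed order.
    return [i % N_OBJECTS for i in range(n_trials)]

def random_sequence(n_trials, seed=0):
    # Each object follows every other object with equal probability
    # (uniform over the remaining seven objects, no immediate repeats).
    rng = random.Random(seed)
    seq = [rng.randrange(N_OBJECTS)]
    while len(seq) < n_trials:
        nxt = rng.randrange(N_OBJECTS - 1)  # pick among the other seven
        if nxt >= seq[-1]:
            nxt += 1
        seq.append(nxt)
    return seq
```

In the deterministic sequence the identity of trial t is fully determined by trial t-1, whereas in the random sequence the preceding object narrows the current object down only from eight to seven possibilities.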
Observers quickly understood the existence and nature of the two types of
sequences (even though the instructions had been silent on this point). These
findings show that temporal context significantly accelerates conditional asso-
ciative learning. Further details about this and four additional experiments have
been reported in [5]. Further, a reinforcement learning model was introduced
in [5] and a neural network model in [3]. Both models aim to investigate the
implicit learning of the temporal context. In contrast to these approaches, we