The various manipulations used in the model and script conditions were not con-
sistent, however, and their exact contributions could not be specified. In the model
condition, for example, learners (a) overheard model collaboration, (b) viewed animation clips, (c) were prompted by the experimenter to self-explain, and (d) were prompted to
ask questions. In the script condition, learners (a) were given precise written instructions that explained how to successfully collaborate in carrying out each step in the learning phase, and (b) were prompted both to ask each other questions and to share their
knowledge. Thus, we cannot presently determine how much each of these specific
manipulations contributed to the learning gains. It is clear, however, that vicarious
analogs of the prompting and guidance provided by the experimenter at specific
junctures in both the model and script conditions could be readily implemented in
computer-based vicarious environments. A final note on Rummel and Spada (2005)
is that those in the unscripted condition, who received no guidance, failed to out-
perform the non-intervention condition. This suggests, consistent with some earlier
findings (Craig, Driscoll, & Gholson, 2004; but see Chi et al., 2008, below), that col-
laboration per se, without any further guidance, may be of limited value. Research
relating to the roles of prompting and collaborative viewing has been reported and
will be considered in that order.
Hausmann and Chi (2002, Exp. 2) contrasted the role of prompts to self-explain
presented by a human tutor, who was sensitive to the learner's current knowledge
state, with automated prompts presented at arbitrary locations by computer in a
vicarious learning environment. As indicated above, research has established that
self-explanations lead to learning gains, that prompting promotes these explana-
tions (Chi et al., 1989, 1994), and that vicarious self-explanations may also lead
to learning gains, at least when combined with deep questions (Craig, Brittingham,
et al., 2009). In a preliminary study, Hausmann and Chi (2002, Exp. 1) presented
two groups of learners with 62 computer-generated statements describing how the
human circulatory system works. Vicarious learners were instructed at the outset
either (a) to generate and type their self-explanations for each of the 62 state-
ments using a keyboard or (b) to simply listen to each statement, with no keyboard
provided. No further instructions or prompts were provided to either group. Few
self-explanations were exhibited. Learners in the group instructed to self-explain
at the outset of the session averaged about one each, and the two groups did not
differ on any measure. It seems, then, that simply instructing students to generate
self-explanations at the outset of a learning session without training or practice (cf.
Ainsworth & Burcham, 2007, on effective training procedures) is ineffective, at least
when they are required to type them.
In their main study, Hausmann and Chi (2002, Exp. 2) contrasted human prompt-
ing with automated prompting by computer as learners observed the same 62
statements on the circulatory system that were presented in Exp. 1. Participants
who received human prompting had two options following the presentation of each statement on the monitor: they could either type a self-explanation or
they could type “ok,” signaling they had nothing to say. What was typed immedi-
ately appeared on the prompter's monitor, and (only) if it was deemed appropriate
the human prompter requested a self-explanation prior to presentation of the next