statement. Each learner in the automated prompting condition was yoked to one in
the human prompting condition. For example, if the learner in the human prompting
condition typed a self-explanation on statement 12 (whether prompted or not), the
computer prompted the yoked learner to self-explain on that statement. On average,
vicarious learners in both groups generated 16 self-explanations, and each averaged
seven integrative self-explanations (Chi, 2000; Chi et al., 1994; see also the previous
subsection). Both groups exhibited significant learning gains and did not differ from
each other. When Hausmann and Chi combined the two groups and divided learners
on the basis of the number of integrative self-explanations (Chi et al., 1994, 2001)
into high and low, the high self-explainers showed learning gains about twice as
large as the low self-explainers. It seems clear, then, that prompting students to
self-explain promotes learning gains. Furthermore, as Hausmann and Chi demonstrated,
automated prompting is readily implemented, and prompts given at arbitrary
locations during a learning session are just as effective as prompts presented by
humans who attempt to be sensitive to the learner's current knowledge state. Hausmann and
Chi (2002) and Chi (2000) clearly showed that prompting to self-explain leads to
learning gains. As noted above, though, from the Rummel and Spada (2005) report
we cannot determine how the prompting for each of the various activities may have
contributed to the collaboration skills, or to the knowledge about diagnosis and
treatment, that learners acquired.
Ge and Land (2003) reported support for the role of automated question prompts
in research on problem solving in an ill-structured task. College students in classroom
settings either (a) received the problem materials along with 10 major question
prompts probing a total of four knowledge categories or (b) received only the
problem materials, with no question prompts. The knowledge categories prompted
by questions were as follows: (a) problem representation, (b) solution, (c)
justification, and (d) monitoring and evaluation. Each of these categories also included
some sub-question prompts. For example, after being asked to define (represent) the
problem, students were then asked to specify individual parts of the problem (Ge
& Land, 2003, Appendix A). They also manipulated peer interaction: each student
either worked in a group with three other collaborators or worked alone.
The learners were asked to find an information technology-based solution for
placing items in a supermarket that would maximize how quickly customers could
locate them. While considering the problem individually or in groups, with or with-
out question prompts available, the learners wrote a two- to three-page report that
was used as a criterion task. Results revealed that learners who had the ques-
tion prompts available exhibited significantly more knowledge in each of the four
prompted categories: their problem representations, the solutions they generated, the
justifications they provided, and their monitoring and evaluation activities, whether
they worked with peers or alone. Ge, Chen, and Davis (2005) have more recently
suggested the possibility that specific kinds of question prompts may play different
roles in improving knowledge in the four categories they investigated. Although
having question prompts available during problem solving increases knowledge
construction, direct comparisons with earlier research on questions that used very
different methodologies appear premature (Craig et al., 2000; Gholson & Craig,