ancillary texts (Levinstein, Boonthum, Pillarisetti, Bell, & McNamara, 2007). The
extended practice module looks and functions exactly the same way as the regular
practice module. This module is where students spend the majority of their time
and are afforded the opportunity to apply the newly acquired strategy knowledge
to new and varied science texts. These texts may already be present within the
system, or they may be newly added texts assigned by the students' teacher.
This extended practice module allows for a long-term interaction with iSTART and
provides self-paced instruction at the individual level.
Providing Feedback in iSTART
Feedback on the content and quality of the self-explanation is a critical component
of practice. This feedback needs to be delivered rapidly to the participant. During the
practice phase, the agents' interactions with the trainee are moderated by the quality
of the explanation. For example, more positive feedback is given for longer, more
relevant explanations, whereas increased interactions and support are provided for
shorter, less-relevant explanations. The computational challenge is for the system
to provide the student appropriate feedback on the quality of the self-explanations
within seconds. This evaluation comprises three steps: First, the response is screened
for metacognitive expressions (such as “I don't understand what they are saying
here”). Second, the remainder of the explanation is analyzed using both word-
based and LSA-based methods (see McNamara et al., 2007). Third, the results from
both methods' analyses are integrated with the metacognitive screening to produce
feedback in one of the following six categories: (1) response to the metacognitive
content; (2) feedback that the explanation appears irrelevant to the text; (3) feed-
back that the explanation is too short compared to the content of the sentence; (4)
feedback that the explanation is too similar to the original sentence; (5) feedback
that makes a suggestion for the following sentence; or (6) feedback that gives an
appropriate level of praise.
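The decision logic described above can be sketched as a simple dispatch over the screening and analysis results. This is an illustrative reconstruction, not the actual iSTART algorithm: the function name, inputs, and all thresholds are hypothetical assumptions made for the sketch.

```python
# Hypothetical sketch of iSTART-style feedback selection. Thresholds and
# input names are illustrative assumptions, not iSTART's actual values.

def choose_feedback(has_metacognitive_phrase: bool,
                    relevance: float,      # e.g., LSA cosine with the sentence
                    length_ratio: float,   # explanation length / sentence length
                    overlap: float) -> str:  # literal word overlap with sentence
    """Map screening and analysis results to one of six feedback categories."""
    if has_metacognitive_phrase:                 # (1) metacognitive expression
        return "respond to metacognitive content"
    if relevance < 0.2:                          # (2) unrelated to the text
        return "explanation appears irrelevant to the text"
    if length_ratio < 0.5:                       # (3) too little content
        return "explanation is too short for this sentence"
    if overlap > 0.8:                            # (4) near-paraphrase
        return "explanation is too similar to the original sentence"
    if relevance < 0.5:                          # (5) adequate but improvable
        return "suggest a strategy for the next sentence"
    return "praise at an appropriate level"      # (6) good explanation
```

The ordering matters: metacognitive content is handled before quality checks, so a student who signals confusion receives a response to that signal rather than a score-based critique.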
Part of iSTART's evaluation procedure is based on LSA. The LSA method con-
trasts with the word-based evaluation, which calculates literal matching among
words at a surface level. LSA is a computational method used to represent word
meanings from a large corpus of text (Landauer & Dumais, 1997; Landauer,
McNamara, Dennis, & Kintsch, 2007). The corpus of text is used to create a
word by document co-occurrence matrix, which is then put through singular value
decomposition to generate a high-dimensional vector space. Within LSA, the
conceptual similarity between any two language units (e.g., words, paragraphs)
is determined by computing the cosine between their representations in this
multi-dimensional vector space. Empirical evaluations of LSA
have shown performance comparable to human judgments of document similarity
(Landauer & Dumais, 1997; Landauer et al., 1998), text coherence (Foltz,
Kintsch, & Landauer, 1998; Shapiro & McNamara, 2000), grades assigned to
essays (Landauer et al., 1998), and the quality of student dialogue contributions
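The LSA pipeline described above (word-by-document matrix, SVD, cosine comparison) can be sketched in miniature as follows. The toy corpus and the number of retained dimensions are illustrative assumptions; real LSA spaces are built from large corpora and typically keep a few hundred dimensions.

```python
# Minimal LSA sketch: word-by-document count matrix -> SVD -> cosine.
# Toy corpus and k=2 dimensions are illustrative, not realistic settings.
import numpy as np

docs = [
    "cells divide during mitosis",
    "mitosis produces two daughter cells",
    "planets orbit the sun",
]

# Word-by-document co-occurrence (count) matrix.
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Singular value decomposition; keep k dimensions to form the reduced space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dimensional vector per document

def cosine(a, b):
    """Similarity of two vectors in the reduced space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two related biology sentences land much closer together
# than the cross-topic (biology vs. astronomy) pair.
sim_bio = cosine(doc_vecs[0], doc_vecs[1])
sim_cross = cosine(doc_vecs[0], doc_vecs[2])
```

This also illustrates the contrast with word-based matching mentioned earlier: the first two sentences share only two surface words, yet their reduced-space vectors are nearly identical because their remaining words co-occur in the same documents.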