The self-explanations from this text were categorized as paraphrases, irrelevant
elaborations, text-based elaborations, or knowledge-based elaborations. Paraphrases
did not go beyond the meaning of the target sentence. Irrelevant elaborations may
have been superficially or tangentially related to the sentence, but were unrelated
to the overall meaning of the text and added nothing to it.
Text-based elaborations included bridging inferences that made links to information
presented in the text prior to the sentence. Knowledge-based elaborations included
the use of prior knowledge to add meaning to the sentence. This latter category is
analogous to, but not the same as, the global-focused category in Experiment 1.
Results. In contrast to the human coding system used in Experiment 1, the coding
system applied to these data was not intended to map directly onto the iSTART
evaluation systems. In this case, the codes are categorical and do not necessarily
translate to a 0-3 quality range. One important goal is to assess (or discriminate)
the use of reading strategies and thereby improve the system's ability to respond
appropriately to the student. This is measured as the percent agreement with human
judgments for each reading strategy, shown in Table 6.3.
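For concreteness, percent agreement here is simply the proportion of self-explanations for which the automated code matches the expert code. The following Python sketch illustrates that calculation; it is not drawn from the iSTART implementation, and the category labels and example codes are hypothetical.

    def percent_agreement(expert_codes, system_codes):
        # Percentage of self-explanations on which the automated system
        # assigns the same strategy code as the human expert.
        if len(expert_codes) != len(system_codes):
            raise ValueError("code lists must be the same length")
        matches = sum(e == s for e, s in zip(expert_codes, system_codes))
        return 100.0 * matches / len(expert_codes)

    # Hypothetical codes for five self-explanations of the Coal text
    expert = ["paraphrase", "irrelevant", "knowledge-based",
              "paraphrase", "current-sentence"]
    system = ["paraphrase", "irrelevant", "knowledge-based",
              "current-sentence", "current-sentence"]
    print(percent_agreement(expert, system))  # 80.0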
Table 6.3. Percent agreement with expert ratings of the self-explanations to the Coal
text for the LSA2/WB2-TT and TM2 combined systems for each reading strategy
in Experiment 2.

Reading Strategy                            LSA2/WB2-TT     TM2
Paraphrase Only                                    69.9    65.8
Irrelevant Elaboration Only                        71.6    76.0
Current Sentence Elaboration Only                  71.9    71.2
Knowledge-Based Elaboration Only                   94.6    90.3
Paraphrase + Irrelevant Elaboration                79.7    76.6
Paraphrase + Current Sentence Elaboration          68.2    67.3
Paraphrase + Knowledge-Based Elaboration           84.6    81.2
The results show that both systems perform very well, with an average agreement of
77% for the LSA2/WB2-TT system and 75% for the TM2 system. This approaches our
criterion of 85% agreement between trained experts who score the self-explanations.
The automated systems can thus be thought of as 'moderately trained scorers.' These
results show that either system could be used to guide appropriate feedback to the
student user.
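The reported averages can be checked directly against the seven per-strategy values in Table 6.3 (simple averaging, shown here in Python for reference):

    # Per-strategy percent agreement values from Table 6.3
    lsa2_wb2_tt = [69.9, 71.6, 71.9, 94.6, 79.7, 68.2, 84.6]
    tm2 = [65.8, 76.0, 71.2, 90.3, 76.6, 67.3, 81.2]

    print(round(sum(lsa2_wb2_tt) / len(lsa2_wb2_tt), 1))  # 77.2
    print(round(sum(tm2) / len(tm2), 1))                  # 75.5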
Each strategy score (shown in Table 6.3) is coded as either 0 = absent or 1 = present.
Under the current coding scheme, only one strategy (out of seven) is given a value
of 1. We are currently redefining the coding scheme so that each reading strategy
has its own score. For example, if the explanation contains both a paraphrase and a
current sentence elaboration, the current coding scheme codes only “Paraphrase +
Current Sentence Elaboration” as a 1. Under the new coding scheme, by contrast,
there will be at least three variables: (1) “Paraphrase” coded as 1 for present,
(2) “Elaboration” coded as 1 for present, and (3) “Source of Elaboration” coded
as 2 for current sentence elaboration.
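As an illustration only, the two schemes might be represented as follows for an explanation containing both a paraphrase and a current sentence elaboration; the dictionary form and any source codes other than the value 2 stated above are assumptions, not part of the actual coding materials.

    # Current scheme: exactly one of the seven categories receives a 1.
    current_scheme = {
        "Paraphrase Only": 0,
        "Irrelevant Elaboration Only": 0,
        "Current Sentence Elaboration Only": 0,
        "Knowledge-Based Elaboration Only": 0,
        "Paraphrase + Irrelevant Elaboration": 0,
        "Paraphrase + Current Sentence Elaboration": 1,
        "Paraphrase + Knowledge-Based Elaboration": 0,
    }

    # Proposed scheme: each strategy gets its own variable. The value 2 for
    # "Source of Elaboration" is stated in the text; any other source codes
    # (e.g., 1 = irrelevant, 3 = knowledge-based) are assumptions.
    new_scheme = {
        "Paraphrase": 1,             # present
        "Elaboration": 1,            # present
        "Source of Elaboration": 2,  # current sentence elaboration
    }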