controlling for the selection of clearly irrelevant
invariants.
The Yule's Q measure was computed by first calculating the proportion of the invariants experts deemed relevant that a student selected (h, the hit rate) and the proportion of the invariants experts deemed clearly irrelevant that the student selected (f, the false alarm rate). Invariants that are technically correct but less relevant to a problem were ignored in this computation. The Yule's Q score was then calculated by the formula Q = (h - f) / (h + f - 2hf). A Q of one indicates perfect discrimination of relevant from irrelevant invariants, while a Q of zero indicates chance performance.
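As an illustration, the sketch below shows one way this score could be computed from a student's selections; the function names and the example invariant labels are hypothetical and are not part of the Inductor tool itself.

def yules_q(h, f):
    # Yule's Q from hit rate h and false-alarm rate f:
    # Q = (h - f) / (h + f - 2*h*f); 1 = perfect discrimination, 0 = chance.
    denominator = h + f - 2 * h * f
    if denominator == 0:
        return 0.0  # h and f both 0 or both 1; treat as chance performance
    return (h - f) / denominator

def discrimination(selected, relevant, irrelevant):
    # `relevant` and `irrelevant` are the expert-designated sets; invariants
    # that are technically correct but less relevant appear in neither set
    # and are therefore ignored, as described in the text.
    selected = set(selected)
    hit_rate = len(selected & set(relevant)) / len(relevant)
    false_alarm_rate = len(selected & set(irrelevant)) / len(irrelevant)
    return yules_q(hit_rate, false_alarm_rate)

# Hypothetical example: the student picks two of three relevant invariants
# and one of two clearly irrelevant ones, giving h = 2/3, f = 1/2, Q = 1/3.
print(discrimination(
    selected=["ohms_law", "kvl", "flux_linkage"],
    relevant=["ohms_law", "kvl", "kcl"],
    irrelevant=["flux_linkage", "charge_storage"],
))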
Student Survey Responses
After completing both tests, students answered questions on a follow-up survey. The students responded that they liked and used the outside resources and the hints we provided after answering incorrectly. One student even suggested, “I was very impressed with the information provided to learn from mistakes. I think that that information should be provided regardless of whether the answer was right or wrong so that if the answer was just a guess I could solidify my understanding.”
Students also mentioned that they thought the test questions helped them reinforce and better apply the concepts they had learned.
When we asked those participants who received instruction on invariants what they believed invariants are and how they are used, the students primarily described invariants as a method for solving a problem. When asked what an invariant is, one student responded: “A circuit invariant is a certain method of circuit analysis needed to solve the problems presented.(ex: ohms law, power, etc).” When asked why it is useful to analyze a circuit by considering the invariants, that student also responded: “It gives the student an idea of where to start the problem.” Other students responded: “It lets you know how to solve the circuit and how to solve like circuits in the future,” and “It allows one to find and use the necessary method of solution quicker.”
Results of Invariant Selection Analysis
The average discrimination of invariants (as
calculated by Yule's Q) when a correct answer
to a circuit problem was chosen was 0.53. The
average discrimination when an incorrect answer
was chosen was 0.39. Thus students were more
likely to select those invariants that experts deemed
relevant on questions they answered correctly.
We found, however, that students' selection of
relevant invariants declined from the beginning of
the tests to the end. The graphs below reveal this
pattern across each of the categories of questions
and across both classes. Several explanations are possible for this pattern of results. One is that the questions grow more difficult from the beginning to the end of a test. Another is a fatigue or indifference factor: students may have been concentrating only on getting the correct answer to each question and gradually paid less attention to the invariant selection.
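For concreteness, averages like those reported above could be produced by grouping each question's Q score by whether the student answered that question correctly; the sketch below uses made-up records and field names, not the study's data.

from statistics import mean

# Hypothetical per-question records: each holds the Yule's Q score for the
# student's invariant selection and whether the circuit answer was correct.
attempts = [
    {"answer_correct": True,  "q": 0.70},
    {"answer_correct": True,  "q": 0.45},
    {"answer_correct": False, "q": 0.50},
    {"answer_correct": False, "q": 0.20},
]

avg_q_correct = mean(a["q"] for a in attempts if a["answer_correct"])
avg_q_incorrect = mean(a["q"] for a in attempts if not a["answer_correct"])
print(f"Average Q, correct answers:   {avg_q_correct:.2f}")
print(f"Average Q, incorrect answers: {avg_q_incorrect:.2f}")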
Problems with Inductor
We identified certain problems from our tests of
the Inductor environment as well. The student participation rate in our pilot studies was low, which we believe was partly because use of the tool was not connected to students' current class work. We found that the outside resources used were not sufficient for addressing many of the difficulties students had in applying invariants to solve problems. Ultimately, Inductor was still
primarily a “test”, and not an engaging, motivating
environment for learning about circuit behavior.
We believe that the inclusion of more open-ended challenge problems, including diagnosis and design questions, will motivate the students to think more deeply and begin to see the importance of understanding how the bridges between invariants