We have evaluated students' satisfaction with TMC using anonymous feedback from our courses. When the first beta of TMC was released in September 2011 for a 200-student CS1 course, half of the course exercises were assessed using TMC. Only 58% of the feedback concerning the use of TMC was positive. Based on the experiences and feedback from the first course, major improvements were made. Since the improvements were finished in January 2012, the student feedback regarding TMC has been very good: 80% of the feedback regarding TMC during the spring 2012 CS1 course was highly positive, and in our first MOOC in programming the results were even better: most of the feedback was praise, and only 9% contained complaints. The majority of the negative comments were not severe, usually concerning minor details such as "it was quite irritating when TMC demanded a question mark..".
During fall 2012, 79% of the comments regarding TMC were positive or highly positive. The mildly negative comments came from the early parts of the course, which leads us to assume that some of the feedback can be explained by students struggling with learning a programming language and not understanding the scaffolding messages properly. In the latter part of the course (the final 6 weeks), feedback related to TMC has been exclusively positive. We have also observed (in person, and through IRC channels and emails) several spontaneous testimonials for the superiority of TMC compared to the assessment automata currently used in several other universities and certain MOOCs.
When considering the educational value of TMC, we must look at the impact of XA in our programming courses. Before XA was introduced, the average pass rate for our CS1 course was 55.49% over 16 course instances. After introducing XA but before TMC, the average pass rate was 73.45% over 3 course instances, and after applying TMC in our XA courses the average pass rate currently stands at 75.76% over 2 course instances.
6 Conclusions and Future Work
We have described TMC, an automatic assessment system that seamlessly supports
XA-style programming courses where the emphasis is on meaningful, scaffolded
exercises and bi-directional communication between students and instructors. We
believe that the main success factors of TMC lie in (1) the multi-level feedback mechanism of XA, (2) the use of an industry-level programming environment, (3) the scaffolding provided by the tests, and (4) the small goals inside the bigger goals.
TMC has improved our course instructors' work by making it more meaningful. By removing trivial exercise checking and scaffolding, instructors can spend more time on more demanding scaffolding tasks. TMC helps advisors by providing meaningful output that can be used for scaffolding students. Using TMC has also enabled us to organize pedagogically meaningful MOOCs in programming.
Using an automated assessment system also has its downsides. We have observed that some of our students rely too much on automatic scaffolding and do not write spontaneous test programs of their own. In the XA context this is not as problematic as it could be: the advisors who monitor the students' progress scaffold them toward creating test programs, pushing the students to think outside their own box.