evaluated. For notetaking purposes, we considered
breakdowns to be incidents where events did not
seem to unfold as expected for at least one of the
parties involved in the incident.
Attention: In observations and interviews we
focused on what participants seemed to be (or said
they were) visually attending to during the class.
We asked them if they were paying attention to
the students (for instructors) or the instructors (for
students), and also noted when they appeared to
be paying attention to other things. For instruc-
tors, we were particularly interested in the extent
to which they paid attention to the satellite and
remote students versus the local ones.
Awareness: We were also interested in the
extent to which students and instructors at both
sites were aware of each other. While we did look
for signs of this in our observations, we relied
mostly on the interview data. Here we coded in-
stances where participants mentioned awareness
of specific people or groups of people.
For the questionnaire data, we used a combina-
tion of single scale items and aggregations. Where
aggregated constructs were used, they were tested
for consistency using Cronbach's α, and values
were above .7, within the range acceptable for
social science research (Nunnally, 1978).
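The consistency check described above (Cronbach's α for aggregated constructs) can be sketched in a few lines. This is a minimal illustration only: the response matrix is invented toy data, and `cronbach_alpha` is a hypothetical helper, not the instrument or software used in the study.

```python
# Minimal sketch of Cronbach's alpha for one aggregated construct.
# Rows = respondents, columns = scale items; toy data, not study data.
from statistics import variance

def cronbach_alpha(items):
    """items: list of rows, each a list of item scores for one respondent."""
    k = len(items[0])                       # number of items in the construct
    cols = list(zip(*items))                # per-item score lists
    item_vars = sum(variance(c) for c in cols)
    total_var = variance([sum(row) for row in items])
    # alpha = (k / (k-1)) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]
alpha = cronbach_alpha(responses)  # a value above .7 would pass the check
```

A construct whose items all move together yields α near 1; the .7 threshold cited above is the conventional floor for exploratory social-science work.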
RESULTS
In this section we describe the results of our
case study. We describe student performance in
the course using the modified ePresence system,
student experience with the system, and then
discuss the experience of presenters.
Student Performance
To address whether or not the modified ePresence
system had an impact on students, we first explored
their performance, using final marks (unadjusted
grades) in the course. We first checked to see if
there was a performance difference between the
two campuses. Surprisingly, there appeared to be
a difference (M_Main = 78.37, SD_Main = 5.99;
M_Satellite = 73.72, SD_Satellite = 7.09; t = 2.15,
p < .05). We then used stepwise linear regression
to see if any demographic variables (full- or
part-time status, years of university study, years
of full-time information technology work
experience, gender, or home campus) affected
students' final grades. Using this method, we
found that only status (full-time or part-time)
was a significant predictor (M_Part-time = 81.10,
M_Full-time = 75.40, t = 3.50, p < .01; model
R² = 17.65, F = 9, p < .01). Moreover, when these
other factors were controlled for, presence at the
satellite campus was not a significant predictor
of student performance.
Given prior work (e.g., Caruso & Kvavik,
2005), we wondered if student experience with
communication technologies in academic settings
affected the perceived utility of our system and
performance in the class. Students were asked to
indicate their agreement with the statement "the
use of information technology results in prompt
feedback from instructors." Those who agreed
with this statement tended to perform better in the
class than those who did not, when tested using a
one-way ANOVA with Tukey's multiple
comparisons and t-tests (M_Agree = 78.60,
M_Disagree = 69.63, SD_pooled = 6.13, p < .05).
This suggests that students who, from past
experience, perceived technology as helpful in
facilitating communication with and feedback
from their course instructors performed better.
Next, we examined student satisfaction with
the system in relation to performance. Students
at the two campuses did not differ in their
satisfaction with how the system worked. Those
who felt the system worked well, however, had
significantly higher final marks in the course
(M_Agree = 81.54, SD = 5.24; M_Neutral or
Disagree = 75.30, SD = 5.67, p < .05).
Finally, we wondered about student attitudes
toward participation and instructor sensitivity as
they related to performance. Students were asked
to what extent they agreed with the statement "I
participated in this class as much as I wanted to."
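The two-campus grade comparison in Student Performance rests on an independent-samples t-test. As an illustrative sketch only (the grade lists below are invented toy data, and `pooled_t` is a hypothetical helper, not the study's analysis code), the test statistic can be computed as:

```python
# Independent-samples (Student's) t-test with pooled variance.
# Toy final marks for two groups; not the study's actual data.
from statistics import mean, variance

def pooled_t(a, b):
    """Return the t statistic for two independent samples, assuming
    equal variances; degrees of freedom would be len(a) + len(b) - 2."""
    na, nb = len(a), len(b)
    # Pooled variance: both samples' variances weighted by their
    # degrees of freedom.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

main = [80, 75, 82, 79, 77, 84]       # toy marks, main campus
satellite = [72, 70, 78, 74, 71, 76]  # toy marks, satellite campus
t = pooled_t(main, satellite)          # compare against the t distribution
```

The reported t = 2.15 with p < .05 would follow from comparing such a statistic against the t distribution with the appropriate degrees of freedom; a library routine (e.g., SciPy's `scipy.stats.ttest_ind`) would also return the p-value directly.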