The primary risks of this approach were that (1) some of the measures could prove unreliable, reducing the number of measures per construct even further, and (2) the limited number of measures might tap only a subset of a given construct. To reduce these risks, the choice of items was made partly on a face-validity basis (in an effort to identify the most relevant measures) and partly on the results of a pilot study, which allowed us to refine the measures into the final set used in the main study. Note that most of the items used in this study were also included in the empirical work of Venkatesh et al. (2003), who tested for commonality across measures from previous work. Because Venkatesh et al. (2003) found that these measures were similar to others purporting to measure the same constructs (i.e., they loaded together in a factor-analytic sense), we have greater confidence that the measures selected for this study are in fact reasonable measures of the constructs. (Further details of the pilot study, and a table showing the similarities between the measures used in this study and those used in related work, are available from the authors upon request.)
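To make the phrase "loaded together in a factor-analytic sense" concrete, the toy sketch below (not the authors' analysis; all data are simulated) shows how items measuring the same construct end up with large loadings on the same factor:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    construct_a = rng.normal(size=(200, 1))  # latent construct A
    construct_b = rng.normal(size=(200, 1))  # latent construct B

    # Six simulated items: three tap construct A, three tap construct B.
    items = np.hstack([
        construct_a + 0.3 * rng.normal(size=(200, 3)),
        construct_b + 0.3 * rng.normal(size=(200, 3)),
    ])

    fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
    # Items tapping the same construct show large loadings on the same factor.
    print(fa.components_.T.round(2))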
Sample and Procedures

Data were collected from junior and senior undergraduate students (business majors) completing a required course in management information systems (MIS). The respondents were required to use the Microsoft Access database management system for a group project (two students per group) representing 10% of their final grade. All students received some training with the software and completed three individual assignments with it prior to completing the group project.

Measures were taken at two time periods approximately two months apart. At the time of the first measurement (T1), the respondents had received a demonstration of the software and had completed one simple assignment using it. At the time of the second measurement (T2), they had received additional training and had completed two additional (more complex) individual assignments as well as the group project (which required fairly extensive use of the software).
All students in five different sections of the course were asked to participate in the study. No inducements were offered, and students were given the option of not participating; all who were invited agreed to participate. Questionnaires were distributed in class to all students who were in attendance during specific class periods. A total of 219 students completed the pretraining questionnaire, and 209 completed the posttraining questionnaire; those not completing both (i.e., those who were absent on one of the days) were removed from the sample. In total, 193 respondents completed both the pre- and post-training measures. Questionnaires from four respondents were removed due to missing data, leaving a net sample size of 189. The questionnaire responses were associated with an identification number, making it possible to match responses by respondent across the two time periods.
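As an illustration of this matching and screening step, the sketch below shows one way it could be done in Python with pandas; the file names and the respondent_id column are hypothetical, not taken from the study:

    import pandas as pd

    # Hypothetical file and column names; the study's data are not public.
    pre = pd.read_csv("pretraining.csv")    # 219 pretraining questionnaires
    post = pd.read_csv("posttraining.csv")  # 209 posttraining questionnaires

    # Keep only respondents who completed both questionnaires; the inner
    # join on the identification number drops anyone absent on either day.
    matched = pre.merge(post, on="respondent_id", suffixes=("_t1", "_t2"))

    # Remove questionnaires with missing data on any item.
    usable = matched.dropna()

    print(len(matched), "completed both")   # 193 in the study
    print(len(usable), "net sample size")   # 189 after missing-data screening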
Of the 189 respondents, 117 were male and 72 were female. All respondents were traditional third- and fourth-year undergraduate students, and all had a reasonable level of familiarity with personal computers. We asked the respondents to rate their skill level with PC operating systems, word processing, spreadsheets, and e-mail using a 7-point scale with anchors of Novice (1), Intermediate (4), and Expert (7). We then summed across the four technologies to obtain a general measure of self-rated expertise. The scores ranged from 10 to 28 (out of a possible range of 4 to 28), with a mean of 19.3 and a standard deviation of 3.3.
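For concreteness, the composite scoring works as follows; the item names here are hypothetical stand-ins for the four self-rating items:

    from statistics import mean, stdev

    # Each respondent rates four technologies on a 1-7 scale
    # (hypothetical example ratings for three respondents).
    ratings = [
        {"os": 5, "word": 6, "spreadsheet": 4, "email": 6},
        {"os": 4, "word": 5, "spreadsheet": 3, "email": 5},
        {"os": 6, "word": 7, "spreadsheet": 5, "email": 7},
    ]

    # Sum across the four technologies: possible scores run from 4 to 28.
    expertise = [sum(r.values()) for r in ratings]

    print(expertise)                          # e.g., [21, 17, 25]
    print(mean(expertise), stdev(expertise))  # study reported M = 19.3, SD = 3.3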
Although the generalizability of the results will be constrained somewhat because use of Access was mandatory for the students, many situations involving the use of information technologies by professionals are also mandatory. In addition, since our intention measure is focused on future, optional use by the respondent, the constraints on generalizability are lessened.