generalizability imposed by the mandatory nature
of the task are mitigated somewhat.
means and standard deviations (at Time 1), and
the item loadings obtained from the PLS run for
Times 1 and 2.
Note that for social factors, both the weights and the loadings are displayed. Because the social factors construct was modeled as formative (Barclay, Higgins, & Thompson, 1995), the relevant indicators are the weights (not the loadings), and the criterion is whether the weights are statistically significant. In
both models, and for all three social factor items,
the weights were positive and significant at p <
0.05. For the reflective constructs, adequate item reliability ideally requires loadings above 0.7 (Barclay et al., 1995). All observed loadings
were close to or above the desired level (i.e., 0.67
or greater). We also examined the loadings and
cross-loadings for all items at both time periods
(see Table 2), and observed no violations (i.e., all
loadings were greater than 0.65, and all cross-
loadings were less than 0.60).
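For illustration, the loading and cross-loading criteria above can be expressed as a short screening routine. The sketch below is hypothetical: the loading matrix and item-to-construct assignments are invented numbers, not the study's data; only the 0.7 and 0.6 cut-offs come from the text.

```python
import numpy as np

def screen_items(loadings, assigned, load_min=0.7, cross_max=0.6):
    """Return (items with low own-loadings, items with high cross-loadings).

    loadings: items x constructs matrix of standardized loadings.
    assigned: for each item, the index of the construct it belongs to.
    """
    low, high_cross = [], []
    for i, c in enumerate(assigned):
        own = loadings[i, c]
        others = np.delete(loadings[i], c)  # cross-loadings for item i
        if own < load_min:
            low.append(i)
        if others.max() >= cross_max:
            high_cross.append(i)
    return low, high_cross

# Invented loading matrix: 5 items on 3 constructs.
L = np.array([
    [0.81, 0.22, 0.15],
    [0.67, 0.31, 0.28],   # borderline own-loading
    [0.18, 0.84, 0.11],
    [0.25, 0.77, 0.33],
    [0.12, 0.19, 0.90],
])
low, high_cross = screen_items(L, [0, 0, 1, 1, 2])
print(low, high_cross)   # [1] [] -- item 2 (index 1) falls below the 0.7 criterion
```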
Table 3 shows the results of further tests for
the reliability and validity of the measures. The
average variance extracted (AVE) is shown for
each construct, as is the Fornell and Larcker
(1981) measure of composite reliability (CR). For
adequate scale reliability, AVE should be greater
than 0.5. CR may be interpreted similarly to
Cronbach's alpha. That is, 0.70 may be considered
an acceptable value for exploratory research,
with 0.80 appropriate for more advanced studies. Table 3 also shows the correlations between constructs; the shaded diagonal cells display the square root of the average variance extracted.
For adequate discriminant validity, the values on
the diagonal (shaded cells) should be greater than
the off-diagonal elements. The corresponding test results at Time 2 are not shown here (for space reasons), but the pattern was similar to that observed at Time 1.
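Assuming standardized indicators (so each item's error variance is 1 − λ²), AVE and composite reliability follow directly from the loadings, and the Fornell-Larcker test reduces to comparing √AVE with the inter-construct correlations. A hypothetical sketch with invented loadings:

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """CR = (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2)), standardized items."""
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return float(num / (num + np.sum(1 - lam ** 2)))

def fornell_larcker_ok(ave_value, corrs_with_other_constructs):
    """Discriminant validity: sqrt(AVE) must exceed every correlation."""
    return np.sqrt(ave_value) > np.max(np.abs(corrs_with_other_constructs))

lam = [0.78, 0.82, 0.75]                 # invented loadings for one construct
print(round(ave(lam), 3))                # 0.614 -- above the 0.5 threshold
print(round(composite_reliability(lam), 3))  # 0.827 -- above 0.80
print(fornell_larcker_ok(ave(lam), [0.41, 0.35]))  # True
```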
At Time 1, all constructs have composite
reliabilities in excess of 0.80, with the exception
of social factors. Average variance extracted is
above 0.50 for all constructs except social factors
Results
The measures were tested using PLS-Graph (Chin
& Frye, 2001) by running the full research model
with the data collected at Time 1 and again with
the data collected at Time 2. The first test of the
measures was to examine the item loadings to
assess individual item reliability. As in the pilot
study, there were weaknesses evident in the load-
ings for the computer self-efficacy measures for
the data collected at Time 1. These results were
somewhat surprising, since the Compeau and
Higgins measures of computer self-efficacy had
demonstrated adequate psychometric proper-
ties in previous use (e.g., Compeau & Higgins,
1995a, 1995b; Compeau et al., 1999). Gundlach
and Thatcher (2000) argue that the self-efficacy
construct is multidimensional, reflecting human-assisted vs. individual self-efficacy. This would
be consistent with our findings. Factor analysis
of the eight items also supported this view. A
principal components analysis resulted in two
factors, one of which included items 1, 2, 6, and 7, and the other items 3, 4, 5, and 8.
Since the variation was greater on the first set and
there was less risk of a ceiling effect, we chose
to retain those items.
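A two-factor structure of this kind can be illustrated on simulated data. The sketch below is not the study's procedure: the data generation, the item groupings, and the eigenvalue-greater-than-one retention rule are all assumptions made for illustration. Two latent factors drive two clusters of four items, and the principal components of the correlation matrix recover two factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulate 8 items driven by two factors (e.g., "individual" vs.
# "human-assisted" self-efficacy); groupings are illustrative only.
f1, f2 = rng.normal(size=n), rng.normal(size=n)
items = np.empty((n, 8))
for factor, idx in [(f1, [0, 1, 5, 6]), (f2, [2, 3, 4, 7])]:
    for i in idx:
        items[:, i] = 0.8 * factor + 0.4 * rng.normal(size=n)

# Principal components of the item correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]        # descending order
n_components = int(np.sum(eigvals > 1.0))       # eigenvalue > 1 rule
print(n_components)                             # 2
```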
The loadings for one measure of perceived
behavioral control (PBC2) were low (below 0.5)
for both time periods. In retrospect, this finding
should not have been a surprise. PBC2 states that
“the amount I use Access is within my control.”
Since the respondents were required to use Ac-
cess to complete their projects, there was very
little variation on this item. This issue was not
a problem in the pilot study, since those respon-
dents had the opportunity to use other software
for completing the assigned task. We therefore
decided to remove this item, and re-run the models.
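A restriction-of-range problem like PBC2's can be caught with a simple descriptive screen before estimation. The sketch below uses invented responses, and the 0.5 standard-deviation cut-off is an arbitrary illustrative threshold:

```python
import numpy as np

# Invented 7-point responses; "pbc2" has almost no variation because
# all respondents were required to use the software.
responses = {
    "pbc1": np.array([3, 5, 2, 6, 4, 7, 3, 5]),
    "pbc2": np.array([7, 7, 7, 6, 7, 7, 7, 7]),
    "pbc3": np.array([4, 6, 3, 5, 2, 6, 4, 5]),
}

# Flag items whose standard deviation suggests restriction of range.
flagged = [name for name, x in responses.items() if x.std(ddof=1) < 0.5]
print(flagged)   # ['pbc2']
```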
Table 1 shows the final list of items, including the