personnel, users will be frustrated by a lack of sufficient assistance (Swanson, 1987). Overall, users will be frustrated by a lack of flexibility in meeting changing data needs (Bailey and Pearson, 1983).
Integrating and Interpreting Accessed Data
There are at least four more dimensions along which TTF could be evaluated as users attempt to make sense of accessed data and incorporate it into their decision processes. Data must be sufficiently accurate that it can be interpreted correctly (O'Reilly, 1982; Zmud, 1978); data from different sources that need to be integrated must be compatible (Epstein and King, 1982; Bailey and Pearson, 1983); the presentation of the data (on screens or reports) must be easily interpreted (Gallagher, 1978; Zmud, 1978); and the data must be sufficiently current (Swanson, 1987; Zmud, 1978). Altogether, this suggests at least sixteen dimensions for evaluating the task-technology fit of information systems and services.
Who Should Be the Judge?
Presumably, there is some “true” underlying TTF, perhaps relative to the best technology available for a given individual and task. But we may never know exactly what that true TTF is. An important question is: Who should make the evaluation of the task-technology fit of a given technology for a given set of users engaged in the given set of tasks? It would be possible to have “experts” make the evaluation, assuming they had a thorough knowledge of the tasks, the technology, and the users. This might be called an “engineering evaluation” of TTF. A second alternative is to ask the users who utilize a system in carrying out their tasks to evaluate its TTF for them personally. This could be called a “user-perception evaluation” of TTF. Each approach has its strengths and weaknesses. The real question is: Which group has a better understanding of the task, technology, and the individuals? Experts may miss important aspects of the task. There are plenty of cases where users ended up employing a technology in unexpected ways because the designers (experts) did not truly understand the tasks. On the other hand, there are certainly gaps between individual perceptions of a technology and its reality. Here we make the assumption that users are goal-directed individuals who are attempting to achieve good performance. In this light, we might expect that they are sensitive to aspects of the technology that lead to higher performance, and thus are capable of evaluating the TTF of systems. Under these assumptions, we can rely on user perceptions for our measures of TTF.
Generating Questions
It is important to realize that even though a construct similar to one of the TTF dimensions may be present in existing MIS questionnaires, these existing questionnaires often do not actually assess task-technology fit. For example, the typical questionnaire for user evaluations of IS—that is, user information satisfaction, or UIS (Bailey and Pearson, 1983)—asks users to rate the information system in the abstract, for the whole organization. The UIS definition of accuracy, for instance, is “the correctness of the output information”; respondents are asked to rate it on a seven-point scale from accurate to inaccurate, high versus low, etc. (Bailey and Pearson, 1983, p. 541). First, data are rarely completely accurate, nor do they need to be, but the questionnaire seems to ask respondents to evaluate whether the data are completely accurate. Second, one might question whether typical users are knowledgeable enough to answer this question in absolute terms, for the