same latent construct (e.g. performance expectancy), while at the same time being
more or less uni-dimensional.
The development of psychometric scales is often described as a three-step pro-
cess: item generation, scale development, and scale evaluation (Hinkin, 1995). The
first step aims at enhancing the content validity of the questionnaire (i.e. that a com-
plete coverage of the domain of interest is obtained through the proposed items);
the latter two steps aim at enhancing the convergent and discriminant validity of the
questionnaire (i.e. that each item correlates highly with other items that attempt to
measure the same latent construct, and weakly with items that attempt to measure
different latent constructs).
Once a substantial set of latent constructs has been developed for a given field,
questionnaires may be used by researchers and practitioners to assess the value of
products. Using validated questionnaires, one can measure how two or more prod-
ucts compare on a given quality dimension (e.g. trust), or compare two different
generations of the same product to assess the impact of the redesign process.
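In practice, such a comparison reduces to scoring each respondent on the scale (typically the mean of that respondent's item ratings) and comparing the score distributions across products. A minimal sketch with invented Likert ratings for two hypothetical products, using a hand-computed Welch t statistic (degrees of freedom and p-value omitted for brevity):

```python
# Minimal sketch of comparing two products on one validated scale
# (say, trust). All ratings are invented; each inner list is one
# respondent's answers to the scale's three items on a 1-7 scale.
from math import sqrt
from statistics import mean, stdev

product_a = [[6, 5, 6], [5, 5, 4], [6, 6, 7], [4, 5, 5]]
product_b = [[3, 4, 3], [2, 3, 3], [4, 4, 5], [3, 2, 3]]

# Each respondent's construct score is the mean of their item ratings.
scores_a = [mean(r) for r in product_a]
scores_b = [mean(r) for r in product_b]

# Welch's t statistic for the difference in mean scores.
na, nb = len(scores_a), len(scores_b)
t = (mean(scores_a) - mean(scores_b)) / sqrt(
    stdev(scores_a) ** 2 / na + stdev(scores_b) ** 2 / nb
)
print(f"mean score A = {mean(scores_a):.2f}, "
      f"B = {mean(scores_b):.2f}, t = {t:.2f}")
```

With real data one would use a full significance test (e.g. `scipy.stats.ttest_ind` with `equal_var=False`) rather than the bare statistic shown here.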
Proponents of the holistic approach in user experience criticize the use of psycho-
metric scales for their inability to capture the richness and diversity of experience
(see Blythe et al., 2007). Below, we will try to expand on this criticism by highlighting
two limitations of a priori defined psychometric scales. We will then introduce
the use of personal attribute judgments as a means to account for diversity in users'
experiences with interactive products.
Firstly, measures that are defined a priori are inherently limited in accounting for
the users' perspective. Especially in developing fields such as user experience,
where the development of new constructs is still in its infancy, researchers
might fail to capture a relevant experience dimension, either because they do not
recognize its importance or simply because no relevant measurement scales exist.
This issue has been repeatedly highlighted in studies of user acceptance of informa-
tion systems that employ pre-defined measurement and structural models such as
the Technology Acceptance Model (TAM) (Davis et al., 1989). A number of stud-
ies have reported limited predictive power of the Technology Acceptance Model,
in some cases accounting for only 25% of the variance in the dependent vari-
able (Gefen and Straub, 2000). Lee et al. (2003) reported that “the majority of
studies with lower variance explanations did not consider external variables other
than original TAM variables”. A typical case is illustrated in figure 2.1, which displays
a two-dimensional visualization of the perceived dissimilarity of three systems
(van de Garde-Perik, 2008). In that study, users were asked both to judge the
overall dissimilarity of the three systems and to rate them on a number of
pre-defined dimensions such as perceived trust, risk, usefulness, and ease of use.
The configuration of the three stimuli is derived by means of Multi-Dimensional
Scaling (MDS) on the original dissimilarity ratings, while the latent constructs are
fitted as vectors in the two-dimensional space by means of regression. While a number
of insights may be derived from this visualization, one may note that systems 3 and
1 are clearly differentiated in terms of their overall dissimilarity, yet none of the
predefined attributes can explain this dissimilarity. In other words, the measurement