to estimate its dominance and probability of occurrence over the total population of
users.
In combination, rich situated insights derived from holistic methods may inform
the validation of artifacts, while reductionist approaches to evaluation may quantify
the importance of a given experience and thus minimize the risk of overemphasizing
interesting but rare experiences.
We argue that a link between validation and extrapolation is missing (see figure 1.3).
Below, we describe how this is addressed in relation to the two research foci of this
manuscript: understanding interpersonal diversity in users' responses to conceptual
designs, and understanding the dynamics of experience over time.
1.4.1 Understanding Interpersonal Diversity through Personal Attribute Judgments
Den Ouden (2006) revealed that the majority of soft reliability problems were related
to the concept design phase and were particularly rooted in design decisions relating
to the product definition. This insight suggests that design decisions made early in
the design process may not be adequately grounded in empirical user insights.
Traditional approaches to measuring users' responses to artifacts rely on the a
priori definition of the measures by the researchers. This approach is limited in at
least two ways when one is concerned with capturing the richness of and diversity
in user experience. First, the a priori definition of relevant dimensions is inherently
limited, as researchers might fail to consider a given dimension as relevant,
or they might simply lack validated measurement scales, especially in developing
fields such as that of user experience, where radically new constructs are still being
introduced. Second, one could even wonder whether rating a product on quality
dimensions that are imposed by the researcher is always a meaningful activity for
the user, for example when the user does not consider a quality dimension relevant
for the specific product. There is increasing evidence that users are often unable
to attach personal relevance to the statements provided in psychometric scales, due to
a failure to recall experiential information that relates to the statement or due to
lengthy and repetitive questioning. Larsen et al. (2008b) reviewed a number of studies
employing psychometric scales in the field of Information Systems. They found,
for the majority of studies, the semantic similarity between items to be a significant
predictor of participants' ratings (.00 ≤ R² ≤ .63). In such cases, they argued, participants
are more likely to have employed shallow processing (Sanford et al., 2006),
i.e. responding to surface features of the language rather than attaching personal
relevance to the question.
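The kind of analysis reported by Larsen et al. can be sketched as a simple regression: for each pair of questionnaire items, regress the observed correlation of participants' ratings on the semantic similarity of the item wordings, and inspect R². The sketch below uses invented illustration data, not figures from the study:

```python
# Hedged sketch of a similarity-predicts-ratings regression, in the spirit
# of Larsen et al. (2008b). The item-pair numbers below are invented for
# illustration only; the original study derived semantic similarities from
# the item texts themselves.
import numpy as np

# Each row: (semantic similarity of an item pair, correlation of ratings)
pairs = np.array([
    [0.10, 0.05],
    [0.25, 0.20],
    [0.40, 0.35],
    [0.55, 0.40],
    [0.70, 0.60],
    [0.85, 0.75],
])
x, y = pairs[:, 0], pairs[:, 1]

# Ordinary least squares: y = b0 + b1 * x
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Coefficient of determination (R^2)
y_hat = X @ beta
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```

A high R² in such a regression would suggest that ratings track the surface wording of items rather than participants' recalled experiences, which is the shallow-processing concern raised above.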
An alternative approach to predefined questionnaires lies in a combination of
structured interviewing, which aims at eliciting attributes that are personally meaningful
to each participant, with a subsequent rating process performed on the attributes
that were elicited during the interview. This approach aims to increase the diversity
and personal relevance of the concepts that are measured, thus resulting
in richer insights. However, the techniques required for the quantitative analysis of