4.5 Discussion
This chapter has discussed two studies that sought to assess how users' experiences
with products develop over time. Besides pursuing distinct goals, the two studies
employed different methodologies for "measuring" the user experience: a reductionist
one and a holistic one.
The first study was reductionist in nature. It employed a validated measurement
model (Hassenzahl, 2004) and sampled users' perceptions at two points in time.
It then tried to identify variations in the structural relationships between the latent
constructs, i.e. the individual quality perceptions and the overall judgments of good-
ness and beauty. While longitudinal studies in user experience are scarce, similar
methodological approaches can be found in the field of Technology Acceptance
(Venkatesh and Davis, 2000; Venkatesh and Johnson, 2002; Kim and Malhotra,
2005). Such studies typically employ validated structural models across different
phases in the adoption of a system. For instance, Venkatesh and Davis (2000) em-
ployed the Technology Acceptance Model (Davis et al., 1989) over three points in
the adoption of information systems in work settings: before the introduction of the
system (inquiring into users' expectations), right after the introduction of the sys-
tem, and three months after the introduction.
An assumption inherent in this approach is that the relevant latent constructs re-
main constant while their perceived value and relative dominance may shift over
time. In developing fields such as user experience, however, substantial variations
may occur over time even in which constructs are relevant to measure.
Some constructs, e.g. novelty, might cease to be relevant while others that were not
evident in studies of initial use might become critical for the long-term acceptance
of a product. Note, for instance, the wider spectrum of experiences relating to daily
rituals and personalization that could not be captured by the measurement model
we employed in the first study. This challenges the content validity of the measure-
ment model, as relevant latent constructs may be omitted. It may also lead to
distorted data, as participants may fail to interpret the personal relevance of a given
scale item to their own context, for instance when a latent construct and its
individual scale items cease to be relevant. In such cases participants may process
the statement of the scale only shallowly, and ratings may reflect superficial language
features of the scales rather than participants' perceptions (Larsen et al., 2008b).
Moreover, such approaches provide rather limited insight into the exact reasons
for changes in users' experiences. They may, for instance, reveal a shift in the dom-
inance of perceived ease-of-use and perceived usefulness on intention to use a prod-
uct (e.g. Venkatesh and Davis, 2000), but they say little about the exact experiences
that contributed to such changes, the underlying motivations for shifts in users'
preferences, or the contextual variations in product use.
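The kind of shift described above can be illustrated with a minimal sketch. The snippet below is not the chapter's actual analysis; it uses invented variable names and synthetic data to show how standardized regression weights of two quality perceptions (ease of use, usefulness) on intention to use might be compared across two measurement waves, in the spirit of Venkatesh and Davis (2000):

```python
# Hypothetical sketch: comparing the relative dominance of two predictors
# of intention to use at two points in time. All data are synthetic and
# the variable names are assumptions, not the chapter's actual measures.
import numpy as np

def standardized_weights(X, y):
    """Return standardized OLS regression weights (betas) of y on columns of X."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 200
# Wave 1: intention to use driven mostly by perceived ease of use.
ease_1, useful_1 = rng.normal(size=(2, n))
intention_1 = 0.7 * ease_1 + 0.2 * useful_1 + rng.normal(scale=0.3, size=n)
# Wave 2 (weeks later): perceived usefulness dominates instead.
ease_2, useful_2 = rng.normal(size=(2, n))
intention_2 = 0.2 * ease_2 + 0.7 * useful_2 + rng.normal(scale=0.3, size=n)

b1 = standardized_weights(np.column_stack([ease_1, useful_1]), intention_1)
b2 = standardized_weights(np.column_stack([ease_2, useful_2]), intention_2)
print("wave 1 betas (ease, usefulness):", np.round(b1, 2))
print("wave 2 betas (ease, usefulness):", np.round(b2, 2))
```

Such a comparison would reveal that dominance has shifted from ease of use to usefulness, but, as argued above, it would say nothing about which concrete experiences produced that shift.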
Study 2 was more effective in providing rich insights into the exact reasons for
the dynamics of users' experiences over time. Beyond eliciting anecdotal reports
on users' experiences, it allowed us, through content analysis of the narratives, to
quantify the dominance of different product qualities on users' overall evaluative
judgments. It enabled capturing aspects of experience that were not identified a-