that much of the developing competence with
and appropriation of the system we observed
would not have come to the fore. Multiple plays
or long-term use proved essential to a deeper
understanding of system use, and we have tried
to maintain this evaluation principle in all of the
subsequent studies discussed in this chapter.
Errors in positioning technology and patchy
network coverage are usually considered to detract
from a user experience. Treasure was designed
to exploit these factors, changing them from
problems into resources for the game. However,
these factors did still prove to be problematic
when it came to collecting data for evaluation.
Unlike in lab-based experimentation, the log data
gathered was often unreliable, in the sense that
the recorded position did not necessarily represent
the player's actual location when an entry in
the log was recorded. Inaccurate positioning and
intermittent connectivity meant that several
versions of the game state could exist at any one
time: one for each player and one for the server.
To make sense of these different streams of data,
the information had to be synchronised. This is normally a
very labour-intensive task (Crabtree et al., 2006),
complicated by the need to explore circumstances
of play and interaction by synchronising events
captured by multiple data sources with multiple,
sometimes conflicting, states of the system at any
one time. Such challenges inspired the design
of Replayer (Morrison et al., 2006), an analytic
tool that integrates and synchronises log data
from multiple sources to allow quantitative and
qualitative forms of exploratory data analysis, an
issue also explored by Greenhalgh et al. (2007).
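The core of such synchronisation can be sketched in code. The following is a minimal, illustrative Python sketch, not the actual Replayer implementation; the log-entry fields (timestamp, source, serialised state) and the one-second conflict window are assumptions made for the example.

```python
from dataclasses import dataclass
from heapq import merge

# Hypothetical log-entry shape; real Treasure/Replayer log formats differ.
@dataclass(frozen=True)
class LogEntry:
    timestamp: float   # seconds since trial start
    source: str        # e.g. "player1", "server"
    state: str         # serialised game state as recorded by this source

def synchronise(streams):
    """Merge chronologically ordered per-source log streams into one
    timeline, sorted by timestamp."""
    return list(merge(*streams, key=lambda e: e.timestamp))

def conflicts(timeline, window=1.0):
    """Flag adjacent, near-simultaneous entries whose recorded states
    disagree, e.g. a player's local state diverging from the server's."""
    flagged = []
    for a, b in zip(timeline, timeline[1:]):
        if b.timestamp - a.timestamp <= window and a.state != b.state:
            flagged.append((a, b))
    return flagged
```

Running the merged timeline through `conflicts` surfaces exactly the kind of divergence described above, where intermittent connectivity leaves a player's device and the server holding different versions of the game state at the same moment.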
Like most evaluations, Treasure's analysis was
conducted retrospectively, after each game or
set of games. However, due to the limited space
within which the game was played, evaluators
were able to directly observe the play. This meant
that they could use their observations to tailor
the questions posed during the interviews that
followed each game, enabling them to prompt
participants to elaborate on areas of the trial that
seemed significant.
EVALUATING A SYSTEM
USED IN EVERYDAY LIFE
Feeding Yoshi was also designed to run on hand-
held devices and exploit 802.11 (Bell et al., 2006).
Rather than connecting to a single wifi access
point (AP), it used the distribution of secure and
unsecured APs as a resource for the game, with
players collecting 'fruit' which grew in unsecured
APs and 'feeding' it to Yoshis in the secure APs.
Feeding Yoshi was designed to be played over
a much wider area and over a much longer time
period than Treasure, with the intention that users
would have a chance to fit it into the contexts and
routines of their everyday lives.
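The game's central mapping from wireless infrastructure to game entities can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not Feeding Yoshi's actual code: the scan-result shape (an SSID paired with a security flag) and the function name are invented for the example.

```python
def classify_access_points(scan_results):
    """Map each scanned AP to a game entity: unsecured APs grow 'fruit',
    while secured APs host Yoshis that players feed the fruit to.

    scan_results: iterable of (ssid, secured) pairs, where secured is a
    boolean security flag from a hypothetical wifi scan.
    """
    entities = {}
    for ssid, secured in scan_results:
        entities[ssid] = "yoshi" if secured else "fruit"
    return entities
```

Because the mapping depends only on APs players happen to pass, the resulting game board is the city's wireless landscape itself, which is what let play spread into office blocks, cafes and suburban areas as described below.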
The trial participants consisted of two teams
in Glasgow, one in Derby and one in Nottingham,
with the study lasting a full week. Unlike Trea-
sure, the evaluation put no constraints on where
the game could be played—participants could
play anywhere that wireless access points could
be readily found, such as office blocks, cafes and
suburban areas. As a result of this, and of our
interest in discovering how players responded
to the contingencies of the technology and of
the everyday setting in which they were playing,
the approach taken to evaluating Treasure was
infeasible here. The game was not run within a
semi-controlled environment, meaning that the
main constraint put on game play was players'
existing circumstances of work, leisure and home
life. Therefore, evaluators were only occasionally
able to observe players, as they were often spread
out over different cities and there was no guar-
antee when or where they would play the game.
Capturing video was similarly difficult; since a
main research question concerned where and
when users would choose to play, there was little
point in arranging contrived meetings to video
system use. In consideration of these challenges,