As Kjeldskov and Stage (2004) describe, there is a wealth of guidelines for running laboratory-based usability studies, but such studies lack realism for mobile devices. Testing mobile devices in mobile settings, however, requires field-based evaluations, which are far from straightforward to implement. Kjeldskov and Stage's review of the literature points out three difficulties: (1) it is difficult to define a study that captures the use-scenario; (2) it is hard to apply many established evaluation techniques; and (3) field evaluations complicate data collection and limit experimental control. Researchers have proposed additional measures, such as distance walked and percentage preferred walking speed, to assess usability (Brewster, 2002; Petrie, Furner, & Strothotte, 1998; Pirhonen, Brewster, & Holguin, 2002), typically using a mix of qualitative questions and manual recording of walking pace. Mizobuchi, Chignell, and Newton (2005) examine the effect of key size on handheld devices while walking.
Barnard, Yi, Jacko, and Sears (2005) review the differences between desktop and mobile computing, and observe that for researchers aiming to isolate the effects of motion from other contaminants, the prospect of such uncontrolled studies can be daunting. Control is critical for empirical data-collection methods that employ the scientific method.
Roto et al. (2004) discuss the use of quasi-experimentation, based on the best possible control over nuisance variables, coupled with recordings of the user, the interaction with the device, and the environment. The innovation in their recordings was the use of multiple cameras worn around the body of the user and attached above the screen of the mobile device. This makes the recording process obtrusive and might change both the user's behaviour and that of people in the surrounding environment. It is also time-consuming to analyse after the experiment. This recording arrangement was used successfully by Oulasvirta, Tamminen, Roto, and Kuorelahti (2005) to investigate the fragmentation of attention in mobile interaction.
INSTRUMENTED USABILITY ANALYSIS
Here, we define 'instrumented usability analysis' as the use, during a usability study, of sensors that provide observations from which the evaluator can infer details of the context of use, or of specific activities or disturbances.
Sensors such as accelerometers, magnetometers and GPS receivers have been added to mobile devices, and are now in mass production in mobile phones. They have been included to inform the user (e.g. about location, or the number of steps taken) or to give the user novel input mechanisms, such as gesture recognition or input for game playing.
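As an illustration of the kind of inference such sensors support, a simple step counter can be built by thresholding peaks in the accelerometer magnitude signal. This is a minimal sketch, not the algorithm of any product cited here; the sampling rate, threshold and minimum inter-step gap are assumed values.

```python
import math

def count_steps(samples, rate_hz=50.0, threshold=1.2, min_gap_s=0.3):
    """Count steps in a stream of (ax, ay, az) accelerometer samples.

    A step is registered when the acceleration magnitude (in g) rises
    above `threshold`, with at least `min_gap_s` seconds between steps
    to suppress double counting. All parameter values are illustrative
    assumptions, not values from the literature discussed above.
    """
    steps = 0
    last_step = -min_gap_s  # time of the previous step, in seconds
    for i, (ax, ay, az) in enumerate(samples):
        t = i / rate_hz
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold and (t - last_step) >= min_gap_s:
            steps += 1
            last_step = t
    return steps
```

In a usability study, the same signal could be logged alongside interaction events so that, for example, error rates can later be compared between periods of walking and standing still.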
There are many examples of both prototype and commercially available sensors and sensor packs for motion or context sensing. Fishkin, Jiang, Philipose, and Roy (2004) describe a system for detecting interactions with RFID technology and suggest it can be used to infer user movement by examining signal strengths from a sensor network. Gemmell, Williams, Wood, Lueder, and Bell (2004) describe the SenseCam system, used to capture life experiences without having to operate complex recording equipment. SenseCam combines a camera with a group of sensors, including an accelerometer, infrared, light and temperature sensors, and a clock, to automatically detect, photograph and map out changes in context or events during a person's day. Kern and Schiele (2003) describe a hardware platform combining multiple wearable accelerometers in order to infer the user's context and actions. They demonstrate how these acceleration signals can be used to classify user activity into actions such as sitting, standing, walking, shaking hands and typing.
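To make the idea of such activity classification concrete, the sketch below assigns a window of acceleration magnitudes to the activity whose feature centroid (mean, variance) is nearest. This is a deliberately simplified, hypothetical scheme, not Kern and Schiele's actual classifier, and the centroid values are invented for illustration.

```python
import statistics

def window_features(window):
    """Mean and population variance of acceleration magnitudes (in g)
    over one window of samples."""
    return (statistics.mean(window), statistics.pvariance(window))

def classify(window, centroids):
    """Assign a window to the activity whose (mean, variance) centroid
    is nearest in squared Euclidean distance.

    `centroids` maps activity labels to (mean, variance) pairs; the
    values used below are illustrative assumptions only.
    """
    mean, var = window_features(window)
    return min(
        centroids,
        key=lambda label: (centroids[label][0] - mean) ** 2
                          + (centroids[label][1] - var) ** 2,
    )

# Hypothetical centroids: standing is near 1 g with almost no variance;
# walking oscillates around 1 g with much higher variance.
centroids = {"standing": (1.0, 0.001), "walking": (1.05, 0.2)}
```

For example, a flat window of magnitudes near 1 g would be labelled "standing", while a strongly oscillating window would be labelled "walking". A practical system would use more features (e.g. per-axis statistics, frequency content) and a trained classifier rather than hand-picked centroids.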
More recently, the general purpose Bluetooth
SHAKE (Sensing Hardware Accessory for Kines-
thetic Expression) inertial sensor pack, described