situations to subtly adjust the nature of the inter-
action—the “background interaction... using
naturally occurring user activity as an input that
allows the device to infer or anticipate user needs”
described in (Hinckley, Pierce, Horvitz, & Sinclair,
2005). For example, if tapping is less accurate
when a user is walking, the display could switch
to a mode with fewer, larger buttons.
In this article, however, we will concentrate on
the use of such information to analyse user be-
haviour in greater temporal detail than is typical
in mobile usability trials. Rather than performing
a trial and then asking subjective questions or
analysing video footage, we will classify activity
from sensors on the device or user, and relate
these classifications to the log of explicit interaction
activity during the evaluation. If a user has an
unusually high error rate, can we better determine
exactly what was happening at that point in time
in each case?
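The idea of relating sensor-derived activity labels to a logged interaction stream can be sketched as follows. This is a minimal illustration, not the article's method: the variance-threshold classifier, the function names, and the thresholds are all assumptions made for the example.

```python
# Sketch: classify coarse activity from accelerometer windows and relate it
# to a log of explicit interaction events. The variance threshold and all
# names here are illustrative assumptions, not the article's actual method.
from statistics import pvariance

def classify_window(samples, threshold=0.05):
    """Label a window of acceleration magnitudes as 'walking' or 'stationary'
    from its variance (a deliberately simple stand-in for a real classifier)."""
    return "walking" if pvariance(samples) > threshold else "stationary"

def error_rate_by_activity(accel_windows, events, window_len=1.0):
    """events: list of (timestamp, was_error) pairs from the interaction log.
    Returns the error rate observed under each inferred activity label."""
    labels = [classify_window(w) for w in accel_windows]
    counts = {}  # label -> (errors, total)
    for t, was_error in events:
        idx = min(int(t // window_len), len(labels) - 1)
        errs, total = counts.get(labels[idx], (0, 0))
        counts[labels[idx]] = (errs + was_error, total + 1)
    return {k: errs / total for k, (errs, total) in counts.items()}

# Two seconds of synthetic data: a still window, then a jittery one.
windows = [[1.0, 1.0, 1.01, 0.99], [0.4, 1.6, 0.3, 1.7]]
events = [(0.2, 0), (0.5, 0), (0.8, 1), (1.2, 1), (1.6, 1), (1.9, 0)]
print(error_rate_by_activity(windows, events))
```

Joining the two logs on time in this way is what allows the question above to be answered per incident: each logged error carries the activity label inferred for the moment it occurred.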
This approach is clearly related to research
on context-dependent interaction, which uses
information from sensors to infer the context of use
(walking, running, in a car, indoors, outdoors) so
that the device can behave more appropriately. (Yi,
Choi, Jacko, & Sears, 2005) used an accelerometer
in an evaluation, but reported mean activity over the
whole period in each condition, rather than
detailed results during the evaluation.
These techniques can be combined with a
system such as Replayer, described in (Morrison,
Tennent, & Chalmers, 2006). This system is
designed to aid usability evaluation and provides
tools that allow evaluators from different back-
grounds to easily view, annotate, and analyse
multiple streams of heterogeneous data logged
during a usability study. These data streams could
potentially be obtained from multiple sensors
attached to a mobile device. A first prototype of
this, using the MESH device used in this article,
is described in (Morrison, Tennent, Williamson,
& Chalmers, 2007).

Example: Mobile Text Entry
An illustration of how the method could be used is
that of mobile text entry. The questions we might
want to answer for a given method could be: do
people use it on the go, as well as when station-
ary? How much slower are they when they use it
while walking? Do they enter text continuously,
but slowly, or do they stop every few metres to
enter more text? How is their error rate related to
their walking speed? Do they link the entry of a
new character with a new step? What is the effect
on walking speed when entering text? If the user
enters text in a car, or bus, how are they affected
by movement of the vehicle? Do they wait until
the bus stops then enter text?
Figure 2 below shows a time-series of accel-
erometer readings while a user enters text, while
seated as a passenger in a car in urban rush hour
traffic with frequent stop-start activity.
Figure 3 shows an example of a user entering
text in various contexts. The user started to enter
text while sitting, stood up, walked around some
narrow corridors avoiding objects, went down stairs,
along a straight corridor, up stairs, then returned
to a seated position and entered more text while
resting his hand on a table. The plots show the
overall activity of the user, along with the through-
put of characters entered at each point. In the
walking case, we can see text entry pause as the
user takes a seat just after 140s, and we see
faster entry rates while seated in the car, compared
to walking. The text entry rates while walking are
nevertheless fairly constant. This illustration shows
how the accelerometers can give us extra
information from which we can infer more about
what was happening at each point in the
interaction.
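The two quantities plotted in Figure 3 can be derived from the raw logs roughly as sketched below. The windowing scheme, window length, and function names are assumptions made for illustration; the article does not specify its exact processing pipeline.

```python
# Sketch: compute per-window overall activity (mean acceleration magnitude)
# and character throughput from raw logs, as plotted in Figure 3.
# The 10 s window and all names are illustrative assumptions.
import math

def activity_and_throughput(accel, keypress_times, duration, window=10.0):
    """accel: list of (t, ax, ay, az) samples; keypress_times: timestamps of
    entered characters. Returns a list of (mean |a|, chars/second) per window."""
    n = math.ceil(duration / window)
    mags = [[] for _ in range(n)]
    chars = [0] * n
    for t, ax, ay, az in accel:
        mags[min(int(t // window), n - 1)].append(
            math.sqrt(ax * ax + ay * ay + az * az))
    for t in keypress_times:
        chars[min(int(t // window), n - 1)] += 1
    return [(sum(m) / len(m) if m else 0.0, c / window)
            for m, c in zip(mags, chars)]

# Toy trial: 20 s, still then moving; typing quickly then pausing.
accel = [(t, 0.0, 0.0, 1.0) for t in range(0, 10)] + \
        [(t, 0.9, 0.0, 0.9) for t in range(10, 20)]
keys = [0.5, 1.5, 2.5, 3.5, 12.0]
print(activity_and_throughput(accel, keys, 20.0))
```

Plotting the two resulting series against time gives the kind of paired activity/throughput view described above, where pauses in entry can be matched to changes in physical activity.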
In this article, which expands on our earlier
work in (Crossan, Murray-Smith, Brewster,
Kelly, & Musizza, 2005), we work towards a
quantitative understanding of the detailed interac-
tions taking place, via additional sensors on the
mobile device and user, so that we can better