are field notes, and it can be hard to investigate short interactions in detail. To remedy this, we used Video Analysis with cameras installed on top of three displays in the iDisplays deployment. Video capture was triggered by motion detection to record user behaviour. On one day, 378 situations in which users passed a display were analyzed. The benefit of this technique was that interactions for an entire day could be observed in detail, allowing quantitative comparison of different displays. However, the cameras capture only a very narrow field of view in front of the display, and it is necessary to capture multiple days of usage because many users behave unnaturally when they first become aware that their actions are being recorded.
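Motion-triggered capture of this kind can be sketched with simple frame differencing. The function below is a minimal sketch using only NumPy; the pixel and area thresholds are illustrative assumptions, not the values used in the iDisplays deployment.

```python
import numpy as np

def motion_detected(prev_gray, cur_gray, pixel_thresh=25, area_frac=0.01):
    """Frame-differencing motion test on two grayscale frames.

    Returns True when the fraction of pixels whose intensity changed
    by more than pixel_thresh exceeds area_frac of the image. A capture
    loop would start the recorder while this holds and stop it after a
    quiet period. Both thresholds are illustrative assumptions.
    """
    # Widen to int16 so the subtraction cannot wrap around in uint8.
    diff = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return (diff > pixel_thresh).mean() > area_frac
```

In a deployment, consecutive camera frames would be fed to this test, with recording started whenever it fires.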
To observe even longer periods of time, we used Automated Face Detection and logged the times at which faces were found in front of the displays. The main benefit of this automatic technique is that user behaviour can be compared over longer periods of time. However, only the view times are captured, and interesting behaviour may go unnoticed. An in-depth discussion of the challenges associated with using video (and other techniques such as usage logs) to produce digital records of interaction in ubiquitous computing environments is presented by Crabtree et al. (2006).
Asking Users

Unstructured Interviews gave us the opportunity to gain further information from users in-situ, after interaction with the display had taken place. To find more detailed answers to specific questions, Semi-structured Interviews proved useful. For example, it became apparent that most users used only very specific information from the iDisplays; the kind of information used, however, varied greatly between users. While one user only wished to view the clock, another was not aware of the clock and only viewed the bus departure times. However, because these interviews take place after the user has finished interacting, users are often not able to recall their own behaviour precisely. Repertory Grid Interviews proved useful for eliciting, in the users' own words, the dimensions along which they think about a given topic. In one case, we asked users to compare different displays that they had used with a navigation system combining mobile phones and public displays (Müller et al., 2008). This resulted in an ordered list of dimensions that influence whether people look at the displays, for example whether users can already see their goal, or whether they are currently looking at the phone or at the environment. This helps in determining which research questions may be worth pursuing and which are not. However, these interviews tend to be very focused on the elicited categories and lack richness. We employed Contextual Inquiry to investigate users' normal procedures when dealing with noticeboards (Müller et al., 2007). From this analysis, it was possible to identify the different kinds of posters and displays people are interested in, as well as opportunities for mobile devices to fit into their workflow. However, this kind of analysis is usually constrained to a few typical situations that users believe to be important, and does not include in-situ observation. Probe Packs proved useful in the Wray deployment for identifying social spaces. A Comments Book in the same deployment generated over 60 feature requests, experience statements and suggestions.

Logging

Interaction logging was implemented in all our deployments and proved very useful in determining variations in long-term use of the systems. For the MobiDiC Shopfinder, for example, interaction logging showed that in seven weeks the Shopfinder was downloaded 130 times, with peak download times in the afternoon, and some downloads as late as 2 am or as early as 7 am. The main benefit of interaction logging is that a great deal of data is generated automatically. When the logs show some interesting patterns, however, it is
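As a sketch of the analysis behind such figures, downloads per hour of day can be counted directly from timestamped log lines. The log format assumed below ('<ISO timestamp> <event>') is hypothetical, not the actual MobiDiC format.

```python
from collections import Counter
from datetime import datetime

def downloads_per_hour(log_lines):
    """Count 'download' events per hour of day from log lines of the
    hypothetical form '<ISO timestamp> <event>', e.g.
    '2008-05-01T14:30:00 download'. Peaks in the resulting Counter
    reveal patterns such as an afternoon maximum."""
    hours = Counter()
    for line in log_lines:
        stamp, event = line.split(" ", 1)
        if event.strip() == "download":
            hours[datetime.fromisoformat(stamp).hour] += 1
    return hours
```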