Technical Challenges
In order to support remote interaction via mobile phones, all of the systems described here utilized Bluetooth, which led to a variety of technical challenges. With multiple devices in range, Bluetooth discovery is often an unreliable process, and when numerous devices are found (sometimes without their textual 'friendly' names) users find it challenging to identify the desired device. We have found that multiple Bluetooth dongles can be used to increase the probability of discovering mobile phones (although discovery carried out on the mobile phones themselves remains problematic).
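As an illustration of the discovery step, the following minimal sketch (assuming the PyBluez library, which the chapter does not name) merges the results of several inquiry scans, mirroring the idea that repeated scans, or scans from multiple dongles, raise the probability of finding a phone.

```python
# A rough sketch, not the deployments' actual code: repeated inquiry
# scans merged into one result set, assuming the PyBluez library.
import bluetooth

def discover_phones(attempts=3, duration=8):
    """Run several inquiry scans and merge the results, since a
    single scan frequently misses devices that are in range."""
    found = {}  # address -> friendly name (may remain None)
    for _ in range(attempts):
        nearby = bluetooth.discover_devices(duration=duration,
                                            lookup_names=True,
                                            flush_cache=True)
        for address, name in nearby:
            # Keep a name if any of the scans managed to resolve it.
            found[address] = found.get(address) or name
    return found

if __name__ == "__main__":
    for address, name in discover_phones().items():
        # Devices without a resolved 'friendly' name are exactly the
        # ones users find hard to identify.
        print(address, name or "<no friendly name>")
```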
Often, multiple public displays are installed next to each other. In the MobiDiC deployment we gave them descriptive Bluetooth names (e.g. MobiDiC-Domplatz-left), while for the Shopfinder it did not matter which display the coupon was sent to.
Another key problem is that of installing applications on users' mobile phones via Bluetooth (mitigated in our studies by supplying users with pre-configured phones). Once the problematic processes of Bluetooth discovery and device pairing have been completed successfully, the application is sent to the user's phone (typically a Java application packaged in a .jar file) and the user is left with the task of installing and running it, which is often very challenging unless the user is familiar with the process.
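The transfer itself is typically performed over the OBEX Object Push profile. The sketch below is an assumption-laden illustration, using PyBluez for service lookup and the PyOBEX library for the push; the phone address, file name and choice of libraries are placeholders rather than the deployments' actual tooling.

```python
# Hedged sketch of pushing a packaged MIDlet (.jar) to a phone over
# the OBEX Object Push profile. Assumes PyBluez and PyOBEX; the
# deployments described here may have used entirely different tools.
import bluetooth
from PyOBEX.client import Client

OBJECT_PUSH_UUID = "1105"  # standard Object Push service class ID

def push_application(phone_address, jar_path="display-client.jar"):
    # Look up the RFCOMM channel of the phone's Object Push service.
    services = bluetooth.find_service(uuid=OBJECT_PUSH_UUID,
                                      address=phone_address)
    if not services:
        raise RuntimeError("phone does not advertise Object Push")
    channel = services[0]["port"]

    with open(jar_path, "rb") as f:
        payload = f.read()

    client = Client(phone_address, channel)
    client.connect()
    # The phone usually asks the user to accept the transfer, and
    # installing and launching the MIDlet remains a manual step.
    client.put(jar_path, payload)
    client.disconnect()
```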
Another key technical challenge is that of providing high levels of reliability. We often found it difficult to detect whether a remote display had crashed, and failures may be localized, for example preventing only one aspect of the system (such as an interaction method) from working. Especially for the larger deployments (Hermes, iDisplays, MobiDiC), we addressed this problem by developing automated detection and notification of failures. One method that proved robust against failures was to take regular screenshots and compare them automatically, so that, e.g., Windows error messages in front of the display content can be detected.
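One plausible shape for such a comparison is sketched below, assuming the Pillow imaging library and an invented 10% threshold; the chapter does not describe the actual implementation.

```python
# Hedged sketch: flag a display whose screen content deviates from a
# known-good reference screenshot (e.g. because an error dialog is
# covering it). Assumes Pillow; the threshold values are invented.
import time
from PIL import ImageGrab, ImageChops

def screen_deviates(reference, fraction=0.10, min_delta=32):
    """True if more than `fraction` of the pixels differ from the
    reference by at least `min_delta` grey levels."""
    current = ImageGrab.grab().convert("L").resize(reference.size)
    diff = ImageChops.difference(reference, current)
    histogram = diff.histogram()          # 256 bins for mode "L"
    changed = sum(histogram[min_delta:])  # strongly changed pixels
    total = reference.size[0] * reference.size[1]
    return changed / total > fraction

if __name__ == "__main__":
    reference = ImageGrab.grab().convert("L")  # assumed healthy state
    while True:
        time.sleep(60)
        if screen_deviates(reference):
            print("screen deviates from reference - notify maintainers")
```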
Pragmatic Challenges
A range of additional pragmatic challenges emerged from our experience of the field trials, including:
Difficulty of observing users - With many users, each interacting for only a few seconds, we found it difficult to explore why a user interacted with the system (e.g. out of idle curiosity or in order to carry out a task).
Difficulty of obtaining users for evaluation - The results of user studies are often skewed by certain user groups (e.g. younger people, who are typically far more prepared to interact and more au fait with technology), and it is often difficult to find non-users in an open community in order to explore their motives.
Difficulty of data logging - Collecting, storing and interpreting usage data can be challenging in itself, but it is also difficult to isolate the 'trace' of genuine users from the background noise of other users idly playing with the system (one possible filtering heuristic is sketched after this list).
Study setup problems - Investigators face decisions such as whether participants should be provided with phones or should use their own.
Difficulty in providing content - Our prototype systems require content that is of interest to potential users, without which adoption is only a remote possibility, and we found providing this content challenging.
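To make the data-logging point concrete, the following sketch (with an invented session format and invented thresholds) shows one way such a filter might look; it is an illustration, not the scheme used in the deployments.

```python
# Hedged sketch: filtering an interaction log so that short 'idle
# play' sessions can be separated from genuine, task-driven use.
# The session format and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Session:
    user_id: str   # e.g. a hashed Bluetooth address
    start: float   # UNIX timestamps
    end: float
    events: int    # number of logged interaction events

def genuine_sessions(log, min_seconds=30, min_events=3):
    """Treat very short sessions with few events as passers-by
    idly playing with the system rather than genuine users."""
    return [s for s in log
            if s.end - s.start >= min_seconds and s.events >= min_events]
```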
COMBINING EVALUATION METHODS
The application of a single evaluation method may highlight an interesting phenomenon, but usually does not provide enough