The features of these categories of systems
enable user mobility, either together with the
system (for mobile devices) or inside it (for
ubiquitous systems). Moreover,
to work properly, these systems must be aware
of their surrounding environment. For instance,
an automotive GPS moves with the car (and so
with its driver) and is aware of traffic jams. An
augmented museum detects the presence of visi-
tors as they move through the halls.
These two characteristics, mobility and
context-awareness, of mobile devices and ubiq-
uitous systems raise new methodological issues
for system evaluation during user studies. We must
thus evaluate not only an “interactive system,”
but more generally an “interactive environment.”
In other words, the evaluation cannot focus only
on the device; it must also take into account the
context of use and its variations while the user
is moving.
Traditional user evaluation methods. If we had
to define a canonical description of user evaluation,
we would say that, traditionally, user evaluation
takes place in a usability laboratory, that is to say a
closed room where the environment can be easily
controlled. The user is asked to perform tasks
chosen by the evaluators.
The evaluation methods are based on the observa-
tion of users either directly, or through one-way
mirrors, or via audio/video recording systems.
Frequently, the user is encouraged to think aloud
in order to facilitate the interpretation of his/her
activity. A facilitator may be present throughout
the experiment or only at the beginning and
the end. His/her role is to give instructions to the
user, to answer any questions the user may have,
and to observe. To sum up, the two key elements
of user evaluation are observation and control of
the experimental variables.
Unfortunately, traditional user evaluation
methods cannot easily deal with the two char-
acteristics, mobility and context-awareness, of
mobile devices and ubiquitous systems. For
instance, the user evaluation of a smartphone for
skiers requires users to move (to ski down the
mountain) with the device in a complex context
(a ski resort) that cannot be mimicked in a tradi-
tional usability laboratory.
Motivations: user evaluation of interactive
environments. As we have said previously, the
characteristics of mobile devices, mobility and
context-awareness, raise new methodological issues.
The main issue is to set up a realistic interactive
environment while keeping enough control to
analyze the interactions between the users and
the interactive system.
So, the question is: what is the degree of
realism necessary to ensure the validity of user
evaluations? In practice, we have to determine
how to place the user in relation with the elements
of the interactive environment, either by simulating
them in a usability laboratory, or by using the
elements of the real world in-situ. A priori, in-situ
experiments should always be the best option.
But this type of experiment
is known to be complex to set up (Kjeldskov, Skov,
Als, & Høegh, 2004). Our aim is to determine if
the higher cost of in-situ experiments is justified
by better results. In other words, we wonder if
in-situ experiments are worth the hassle.
The article is structured as follows. First, we
detail related work. After reviewing the state of
the art, we present our thesis and explain our
research roadmap. We then detail the three experi-
ments of this roadmap. Finally, the article presents
a generalization of our results and proposes a
methodology for the user evaluation of
interactive environments.
METHODS: LABORATORY
OR IN-SITU?
First, it is necessary to determine which elements
of the interactive environment are relevant and
must be set up for the experiment. We pro-
pose to structure this set of elements into four
categories: the user (or users if a collaborative
environment is under evaluation); the devices in