The significance of FlexKit for large-scale trials can best be seen when evaluators work with developers to adapt the system to capture previously uncaptured data, and so enable the exploration of new ideas or research questions raised during a system deployment. FlexKit has its own logging layer, so when an evaluator wishes to augment or change the logged information, the developer can update the relevant module with the new data-capture facility. Through the FlexKit update mechanism, this module can then be silently integrated into the running application. The main advantage of this approach is that it makes the evaluation of mass participation systems much easier: updates can happen without evaluators having to be in contact with participants.
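The mechanics of such an update are simple to sketch. The following Python fragment is our own minimal illustration of the pattern just described, not FlexKit's actual API: names such as LoggingLayer, FileLogger and apply_update, and the module path passed to the latter, are hypothetical. It shows a logging layer whose capture modules can be swapped while the application runs, so that a new data-capture facility reaches participants' devices without any contact with them.

    # Minimal sketch of a hot-swappable logging layer (illustrative only;
    # not FlexKit's actual API). Capture modules are looked up at call
    # time, so replacing a registry entry takes effect on the next event.
    import importlib
    import json
    import time

    class LoggingLayer:
        """Holds named capture modules and forwards each event to all of them."""
        def __init__(self):
            self.modules = {}

        def register(self, name, module):
            # Re-registering under an existing name silently replaces the
            # old module: this is the "silent integration" step.
            self.modules[name] = module

        def log(self, event, **data):
            record = {"t": time.time(), "event": event, **data}
            for module in self.modules.values():
                module.capture(record)

    class FileLogger:
        """Original capture facility: append events to a local log file."""
        def __init__(self, path):
            self.path = path

        def capture(self, record):
            with open(self.path, "a") as f:
                f.write(json.dumps(record) + "\n")

    def apply_update(layer, name, module_path):
        # Hypothetical update step: load newly downloaded capture code by
        # its dotted module path and swap it into the running layer.
        module = importlib.import_module(module_path)
        layer.register(name, module.Logger())

    if __name__ == "__main__":
        layer = LoggingLayer()
        layer.register("events", FileLogger("trial.log"))
        layer.log("screen_view", screen="map")

The key design choice is that the layer dispatches to whatever modules are registered at the moment an event is logged, so an evaluator-driven change to what is captured requires no restart and no participant involvement.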
CONCLUSION

While we would not necessarily claim that what evaluators did on Ego and in Hungry Yoshi, i.e., orchestrating the experience and modifying the research questions on the fly, is radically new, we suggest that the way they did it contributes to the argument that more strongly adaptive evaluation is an appropriate strategy for overcoming the kinds of control problems faced when evaluating user experiences of large geographic and temporal scale.

A variety of methods have been discussed that enable the evaluator to maintain some degree of control and connection. Adaptive journals such as FlexiFill, like experience sampling, aim to capture elements of the evaluation that may be neglected if reflection is attempted only at a later date. Logged data can be visualised post hoc (Greenhalgh et al., 2007; Morrison et al., 2006), or alternatively can be streamed in real time, providing a continuous awareness mechanism for the otherwise isolated evaluator. Orchestration techniques commonly employed in performance-based systems and games (Crabtree et al., 2004; Vogiazou et al., 2006) allow evaluators to direct the course of the evaluation as their research questions change in the light of ongoing observations. We suggest that, when combined, such a collection of techniques affords the adaptive evaluation of Ubicomp systems in the wild, and opens up new directions for future work on novel tools and methods for evaluation. Although many of the systems we have reviewed in this chapter (whether our own or others') are games-based, we would also argue for the relevance of adaptive evaluation techniques to a broad range of Ubicomp domains.

More generally, we see strong benefits for evaluators in taking advantage of the same design principles and technologies that we are developing for users: using wireless networks and distributed sensors and cameras as tools for maintaining awareness, and building up models of context and information with use (and perhaps even with users). We see the potential to make evaluation more of a synchronous, engaged experience despite the vagaries of geographic and temporal scale, shifting context, and the work of fitting Ubicomp evaluation into the routine of our own everyday lives.
ACKNOWLEDGMENT
Several people apart from the authors worked on the systems and evaluations described here. In particular, we thank Louise Barkhuus, Marek Bell, Barry Brown, Owain Brown, John Ferguson, Malcolm Hall, Donny McMillan and Paul Tennent. This research was funded as part of the Equator interdisciplinary collaboration (UK EPSRC GR/N15986/01) and continued under the Contextual Software project (UK EPSRC EP/F035586/1).