and whether it is easy to learn and use. These notions apply to MMD just as well, as do the ergonomic criteria identified for MMI: social acceptability (a socially unacceptable system would, for example, be one that asks users intrusive questions) and practical acceptability (constraints of production, cost and trust). Acceptability is also the field in which we find usability, with criteria such as ease of learning, memorability, a low error rate, the ability to guide the user, workload management, etc.
Beyond these criteria, the field of MMI offers a set of methods that can be combined to produce a relevant assessment: analysis of representative users' opinions, analysis of representative users' activity (video recordings, observation, eye tracking, physiological measurements), expert judgment, and assessment grids covering the qualities expected of a good system. While these methods can only be applied to a running system, others can be used as early as the system design stage: expert judgments and theoretical modeling of the interaction (analytical approaches: predictive formal models, quality models and software models). Finally, a third set of methods is dedicated to prior assessment, that is, assessment from the specification phases onward, for example by taking human factors into account when designing the system or by following the principles of cognitive engineering, especially those that aim to put the user back at the heart of each specification and design stage.
Many of these methods have no equivalent in MMD. This is the case, for example, of analytical approaches that aim to formally model the user's behavior: while certain behaviors in front of an MMI can be predicted and formalized, the issue is far more complex in MMD, as all the previous chapters make clear. Some methods, however, carry over readily and even take a more precise form in MMD. Regarding the classic method of expert judgment, for example, Gibbon et al. [GIB 00] show that in MMD at least three experts are required to identify almost half of the usability problems. The more experts are involved, the more problems are identified, but the more costly the assessment becomes in time and human resources. More recently, [KÜH 12] presented a set of methods including not only an expert group test but also the cognitive walkthrough, which consists first and foremost of an analysis of the task by decomposing it into actions: at least one expert follows a path determined to be optimal for solving the task and checks, at each step, that the next action would be accessible to a novice user.
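The cognitive walkthrough procedure described above can be sketched as a simple record: the task is decomposed into a sequence of actions, and for each action on the optimal path the expert answers a fixed set of questions about what a novice user would notice and understand. This is a minimal illustrative sketch; the data structures, class names and exact question wording are assumptions for the example, not part of the cited method.

```python
# Sketch of a cognitive walkthrough record (illustrative structure only).
# A task is decomposed into actions; for each action on the optimal path,
# the expert answers standard walkthrough-style questions, and any "no"
# answer is flagged as a potential usability problem for a novice user.

from dataclasses import dataclass, field

# Question wording is an assumption for this sketch.
QUESTIONS = (
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the intended effect?",
    "Will the user see that progress is being made?",
)

@dataclass
class Action:
    description: str
    answers: list          # expert's yes/no answers, one per question
    notes: str = ""

@dataclass
class Walkthrough:
    task: str
    actions: list = field(default_factory=list)

    def problems(self):
        """Return (action, question) pairs where the expert answered 'no'."""
        found = []
        for action in self.actions:
            for question, ok in zip(QUESTIONS, action.answers):
                if not ok:
                    found.append((action.description, question))
        return found

# Usage: a two-step spoken-dialogue task with one problematic step.
wt = Walkthrough(task="Set a reminder by voice")
wt.actions.append(Action("Say the wake word", [True, True, True, True]))
wt.actions.append(Action("Confirm the proposed time",
                         [True, False, True, True],
                         notes="Confirmation prompt is easy to miss"))
for desc, question in wt.problems():
    print(f"{desc}: {question}")
```

With several experts, each would fill in such a record independently; as noted above, the union of their flagged problems grows with the number of experts, at the price of a more expensive assessment.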