However, for the present work we use a textual notation structured according to the interaction sentences introduced in Section 3.1.2, "Interaction sentences." As a next step we could cross-compare several UIs and contrast their approaches to the designer's narration: some, we assume, would appear simple; others overly complex or full of possibly useless redundancies. The same method would also let us compare UIs from different periods of time and contrast how different epochs approached UI storytelling and narration. Likewise, it could be used to compare how different cultures approach UI narration, whether simple and direct, oriented to particular senses, or relying on manual, cognitive, or emotional techniques.
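To make this concrete, the following minimal sketch (the type and field names are our own illustration, not the notation defined in Section 3.1.2) shows how an interaction sentence transcript might be represented so that several UIs can be cross-compared programmatically, for instance by measuring redundancy in the narration:

    from dataclasses import dataclass, field

    @dataclass
    class InteractionSentence:
        """One step of the UI's narration: who acts, how, and on what."""
        actor: str     # e.g. "user" or "system"
        action: str    # e.g. "presses", "displays"
        target: str    # e.g. "Send button", "confirmation dialog"
        modality: str  # e.g. "manual", "visual", "auditory"

    @dataclass
    class Transcript:
        """A UI's narration as an ordered list of interaction sentences."""
        ui_name: str
        epoch: str  # e.g. "1990s desktop", "2010s mobile"
        sentences: list[InteractionSentence] = field(default_factory=list)

    def redundancy_ratio(t: Transcript) -> float:
        """Share of sentences repeating an earlier (actor, action, target)
        triple; a crude proxy for 'possibly useless redundancies'."""
        seen, repeats = set(), 0
        for s in t.sentences:
            key = (s.actor, s.action, s.target)
            repeats += key in seen
            seen.add(key)
        return repeats / len(t.sentences) if t.sentences else 0.0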
As for evaluation methods, we discuss the available options, as well as our choice, in the following paragraphs. From the semiotic and UI language principles presented so far we extracted a set of heuristics, and on that basis developed a semiotic analysis (SA) method that takes as input the interaction sentence transcript together with figures from the UI. Because it is an evaluation method carried out by experts, we wanted to compare it with a well-known method to see whether, and how, the results would differ. There are a number of expert-evaluation methods, including cognitive walkthrough, heuristic evaluation, expert inspection, and semiotic analysis. A comprehensive comparison of usability methods was done, for example, by Andre (2000), so we shall not go into much detail here. To compare methods of expert evaluation we chose heuristic evaluation (HE) and semiotic analysis (SA). Our criteria were: fast and easy to carry out, results accessible to nonexperts, and comparability with previous data. The goal was to validate the SA against a nonsemiotic method. Although each method follows a given set of heuristics, we expect them to have only a few overlaps, given their different emphases.
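As a rough illustration of such a comparison (the finding labels below are hypothetical), the results of the two methods can be treated as sets of findings whose overlap is then measured:

    # Hypothetical finding labels produced by each method for the same UI.
    he_findings = {"ambiguous icon", "missing undo", "slow feedback"}
    sa_findings = {"ambiguous icon", "mixed metaphors", "inconsistent signifiers"}

    overlap = he_findings & sa_findings
    jaccard = len(overlap) / len(he_findings | sa_findings)
    print(overlap)           # {'ambiguous icon'}
    print(f"{jaccard:.2f}")  # 0.20 -- few overlaps, as the different emphases suggest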
Heuristic evaluation
The heuristics considered are based on Marcus et al. (2003a); see Appendix A, "Heuristic Evaluation."
"Heuristic evaluation is a discount usability engineering method for quick, cheap, and easy evaluation of a user interface design. Heuristic evaluation is the most popular of the usability inspection methods. Heuristic evaluation is done as a systematic inspection of a user interface design for usability. The goal of heuristic evaluation is to find the usability problems in the design so that they can be attended to as part of an iterative design process. Heuristic evaluation involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the 'heuristics')" (Nielsen Norman Group, Heuristic Evaluation Articles and Training, http://www.nngroup.com/topic/heuristic-evaluation, accessed 2013-06-02).
The primary goal of an HE is to identify, at an appropriate level of detail, usability failures and near-failures (together with successes), measured informally against usability principles, with direct citation of the principles involved and an informal grade of the severity of each finding.
We expect to gather the following evaluations and recommendations from the HE:
significant errors, significant successes, recommended improvements, and prioritized
actions.
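A minimal sketch of how such HE findings could be recorded and prioritized, assuming an informal 0-4 severity scale and field names of our own choosing:

    from dataclasses import dataclass

    @dataclass
    class HeuristicFinding:
        """One HE result: a failure, near-failure, or success, tied to a principle."""
        heuristic: str    # the usability principle cited, e.g. "Consistency"
        description: str  # what was observed in the interface
        severity: int     # informal grade, 0 (no problem) .. 4 (catastrophic)
        is_success: bool = False
        recommendation: str = ""

    def prioritized_actions(findings: list[HeuristicFinding]) -> list[HeuristicFinding]:
        """Order the problems by severity so the worst are attended to first."""
        problems = [f for f in findings if not f.is_success]
        return sorted(problems, key=lambda f: f.severity, reverse=True)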