selecting a widget or command. The action is manifested by mouse-down (click) on
a “go” button or menu item, releasing the mouse over the command, or dragging. As
Marcus (2003a) holds, “All widgets are about some selection/indication of intent”
(Ibid., 4.4.8). Any visual element in the UI can thus start a sentence and continue
a dialogue with the user. According to Foley and Wallace, the essential features of
the sentence structure are: “indivisible, complete thought; unbroken actions; a well-
defined 'home state'; regularity of pattern” (Foley and Wallace, 1974, p. 465). They
provide examples of such sentences:
Draw a line from this point to that one. Apply this constraint to that object. Rotate this
object about that axis by the following amount.
(Foley and Wallace, 1974)
The sentences should match the user's natural language (mental model) as closely as
possible for the interaction to be natural (“intuitive”) and effective. The level of clarity
can be assessed during evaluation with a think-aloud protocol, in which user utterances
are analyzed and categorized in order of relevance and priority (i.e., how well, if at
all, the designer's intended meaning is transferred to the user, as explained by de
Souza, 2005), or checked for consistency (syntax) and meaningfulness (semantics).
Although the main focus here is on visible and interaction syntax, we should seek the
best possible syntax-semantics alignment.
In order to match users' mental models and allow for a natural flow of interaction,
interaction sentences should follow “a number of syntactic principles of naturalness
for action sequences” (Foley and Wallace, 1974, p. 465): besides the aforementioned
sentence structure, also “visual continuity, tactile continuity, and contextual conti-
nuity” (Ibid.).
According to Brandt (1993, p. 138), “the flow becomes an active, transcendental
principle, an instance that carries a kind of general state of belief that cannot be stated
explicitly, but only shows its effects in the work that the semiotic flow yields at the
separate stations.”
From a semiotic analysis perspective, an interesting possibility is extracting the
possible interaction sentences from the UI, and by doing so, being able to evaluate the
UI. This idea is based on the concept that “language provides facilities for controlling
the information structure of the sentence: theme/rheme, old/new information, and
background/focus” (Andersen, 2001, p. 6). Andersen even believes that a series of
sentences can be translated into a UI form, thus helping to create the UI.
There are already different kinds of user-interaction notations that have been used
since the 1980s (e.g., Backus-Naur Form notation, or BNF; Task-Action Gram-
mar, or TAG—Payne and Green, 1986; ETAG (de Haan, 2000); notation based on
object-oriented programming—Andersen, 1997; the Katz and Fodor model adapted
from Eco (1979) in O'Neill (2002)). These, however, produce pseudoprogramming
code, which often provides too much detail, is hard to read, and does not scale well
for more complex UIs.
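The flavor of such grammar-based notation can be conveyed with a toy example. The following sketch is not any of the cited systems; it is a minimal, hypothetical BNF-style grammar, written in Python, that accepts interaction sentences of the kind Foley and Wallace describe and rejects syntactically inconsistent ones:

```python
# A toy BNF-style grammar for interaction sentences (illustrative only;
# the grammar, symbol names, and word choices are assumptions, not a
# published notation). Keys are nonterminals; anything else is a terminal.
GRAMMAR = {
    "sentence": [["verb", "noun", "modifier"]],
    "verb": [["draw"], ["apply"], ["rotate"]],
    "noun": [["line"], ["constraint"], ["object"]],
    "modifier": [["from", "point", "to", "point"],
                 ["to", "object"],
                 ["about", "axis", "by", "amount"]],
}

def matches(symbol, tokens):
    """Return the remaining tokens if `symbol` matches a prefix, else None."""
    if symbol not in GRAMMAR:               # terminal: consume one matching word
        return tokens[1:] if tokens and tokens[0] == symbol else None
    for production in GRAMMAR[symbol]:      # nonterminal: try each production
        rest = tokens
        for part in production:
            rest = matches(part, rest)
            if rest is None:
                break
        else:
            return rest
    return None

def is_valid(sentence):
    """A sentence is valid if it parses completely as <sentence>."""
    return matches("sentence", sentence.lower().split()) == []

print(is_valid("draw line from point to point"))      # True
print(is_valid("rotate object about axis by amount")) # True
print(is_valid("line draw point"))                    # False
```

Even at this scale the notation's drawbacks noted above are visible: every permitted word and phrase must be enumerated, and the grammar grows quickly as the UI's vocabulary does.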
In the context of HCI, a similar concept to interaction sentences is a scenario.
According to Carroll (2000, p. 46), “[s]cenarios are stories—stories about people
and their activities.” The interaction sentences share with the scenarios their incom-
pleteness. As Carroll (2000) says, “Scenario descriptions are fundamentally heuristic;