which is called 'metaphor', focuses on the perceptual similarity between physical
objects and their digital counterparts. This similarity concerns both the physical
shape/look/sound of the object and the actions that are performed with this object.
The second axis ('embodiment') corresponds to the spatial distance between the tan-
gible object, which is used for input by the user, and the device that provides system
output. Fishkin distinguishes four levels of embodiment - ranging from feedback
which is directly provided by the tangible to feedback which is provided on a dis-
tant device. Fishkin states that, in order to generate the impression of computation
being embodied within the tangible objects, the tangible input device should also
be the output device. However, in other situations, a spatially more distant system
feedback might be more appropriate. Besides this spatial offset, Fishkin's taxonomy
does not address temporal offsets. These occur in scenarios where physical inter-
action cannot be captured or processed in real-time by the computer system. An
example is the situation in which a digital pen temporarily buffers data when used
in a mobile setting before being synchronized with a computer at a later point in
time. The distance between input and output is also addressed by the framework of
Koleva et al. [66], which denotes this as the 'degree of coherence'.
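As an illustration (not part of Fishkin's original presentation), the two axes might be sketched as a small data model. The four embodiment levels follow Fishkin's taxonomy as described above; the class and function names are hypothetical:

```python
from dataclasses import dataclass
from enum import IntEnum


class Embodiment(IntEnum):
    """Fishkin's four embodiment levels: the spatial distance between
    the tangible input object and the system output, closest first."""
    FULL = 0           # output appears in the tangible object itself
    NEARBY = 1         # output appears directly next to the tangible
    ENVIRONMENTAL = 2  # output appears in the surrounding environment
    DISTANT = 3        # output appears on a spatially distant device


@dataclass
class TUIClassification:
    """Hypothetical record placing a TUI on Fishkin's two axes."""
    name: str
    metaphor: str          # degree of perceptual similarity, e.g. 'noun', 'full'
    embodiment: Embodiment


def computation_seems_embodied(tui: TUIClassification) -> bool:
    """Per Fishkin, the impression of computation being embodied in the
    tangible is strongest when input and output device coincide."""
    return tui.embodiment == Embodiment.FULL


# A pen that renders ink on its own writing surface vs. a token
# whose effect shows up on a distant wall display.
pen = TUIClassification("digital pen", "full", Embodiment.FULL)
token = TUIClassification("control token", "noun", Embodiment.DISTANT)
```

Modeling the embodiment axis as an ordered enumeration reflects that Fishkin's levels form a spectrum of increasing input/output distance rather than unordered categories.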
Other frameworks analyze the concrete interactions that the user performs with
tangible interfaces. Interaction in most TUIs is centered on moving and arranging
physical objects. For instance, the seminal URP system [159] enables urban plan-
ners to modify a digital model of urban buildings by rearranging physical models of
these buildings. With other systems, such as the Marble Answering Machine [43] or
MediaBlocks [157], the user accesses and modifies digital information by moving
and arranging objects that act as physical handles for this information. Correspond-
ingly, theoretical approaches to interaction within TUIs conceptualize interactions
as changing the location or orientation of objects.
The framework of Ullmer and Ishii [158] classifies TUIs by the way in which
they combine multiple tangible objects. The TAC paradigm [132] states that it is
the physical constraints that define which interactions are possible (and not possi-
ble) with tangible objects in a TUI. Again, interactions are conceptualized as dis-
placements and compositions of tangible objects. These concepts do not account
for other types of interactions that alter the tangible objects themselves rather than
displacing them. Ishii and Ullmer [43] transfer a set of GUI elements to TUIs (such
as windows, icons and handles) suggesting generic physical instantiations of these
elements. The focus is again on interaction as displacements, rotations and com-
positions of objects. Interactions with an individual object have meaning only with
respect to other objects or to a reference frame. Finally, Wimmer [174] contributes
a descriptive model of meaning that is expressed by grasping a tangible object.
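The displacement-centric view shared by these frameworks, in which an interaction with a single object is read as a translation and/or rotation relative to a reference frame, can be made concrete in a minimal sketch. The names and the interaction vocabulary here are hypothetical, chosen only to illustrate the idea:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Pose:
    """Position and orientation of a tangible in a shared reference frame."""
    x: float
    y: float
    angle: float  # orientation in degrees


def interpret(before: Pose, after: Pose) -> str:
    """Classify a manipulation of one object as a displacement and/or
    rotation relative to the reference frame -- the only interaction
    types the frameworks discussed above conceptualize."""
    moved = (before.x, before.y) != (after.x, after.y)
    rotated = before.angle != after.angle
    if moved and rotated:
        return "translate+rotate"
    if moved:
        return "translate"
    if rotated:
        return "rotate"
    return "none"
```

Note what such a model cannot express: interactions that alter the object itself (deforming, annotating, or otherwise changing its state) have no representation, which is exactly the gap pointed out above.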
None of these theoretical frameworks accounts for the collaborative use of TUIs
by multiple users. The framework of Hornecker and Buur [41] briefly discusses
co-located use of TUIs, pointing out that TUIs offer multiple points of
interaction and thus provide spatially distributed control. Nevertheless, the
collaborative dimension has not been extensively analyzed in TUI models.