more often, which is perhaps the reason that mini-maps and virtual compasses are
so prevalent in desktop game environments.
Given the difficulty of spatial updating in a desktop VE, it might reasonably be
expected that users would have difficulty integrating their experiences into a coherent
survey representation or cognitive map of the VE. The evidence here is less clear.
While there is evidence that people may be able to form survey representations from
desktop VEs in some cases (e.g., [86, 89, 111]), there are also examples of users
failing to piece together relatively simple spatial layouts after desktop navigation.
For example, Stanton et al. [97] had participants explore two opposite legs of a virtual
kite-shaped maze using a desktop VE system. The task required participants to travel
from corner 1 to corner 2 and back, and later from corner 3 to corner 4 and back, without ever traveling
between these two routes. Despite repeated practice locating each of the four corners
and the presence of distinct visual landmarks with which to locate these places and
paths relative to each other, participants were unable to take novel shortcuts at above-
chance levels in subsequent testing. It is unclear whether this same result would have
been found in VEs that offer wide fields of view (such as CAVEs, discussed below) or
that incorporate body-based senses (such as HMD-based systems, discussed below).
Desktop + Motion Controller
Many modern gaming systems and some desktop simulation systems allow the user
to interact with simulations using naturalistic motions. The best known of these
systems are perhaps the Nintendo Wii and Microsoft Kinect, both of which can
be used either on their respective game consoles or on a computer. Such systems
leverage the idea of incorporating the user's body into the desktop simulation
experience. Indeed, Microsoft marketed the Kinect system with the slogan, “You are the
controller.” These interfaces are still in their infancy, making it hard to draw firm
conclusions about the impact that they have on the user's ability to form accurate
spatial knowledge. There is also substantial variation in the way that users interact
with various devices both between platforms and between different simulations on
the same platform. For example, different simulations on the Wii might use gestures
from one of the user's hands, posture information from a balance board, traditional
button-presses, or some combination of the above. Similarly, the Kinect performs
full-body tracking of the user that could be directly mapped onto the movements of
one's avatar, processed to extract pre-defined control gestures, or merely recorded
and post-processed. Other accessories, like the Wii balance board, a Dance Dance Revolution
dance pad, or even the early Nintendo Power Pad, could be leveraged to implement
a walking-in-place navigation interface.
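While no specific implementation is described here, the walking-in-place idea can be sketched concretely. The Python sketch below assumes a tracker that reports per-frame vertical foot positions (as a Kinect skeleton stream or an instrumented pad might); the FootSample structure, the WalkInPlaceController class, and all thresholds are illustrative assumptions rather than any vendor's actual API. A step is counted when a lifted foot returns to its resting height, and each step grants a burst of forward speed that decays between steps so the avatar stops when the user does.

from dataclasses import dataclass

@dataclass
class FootSample:
    left_y: float   # vertical position of the left foot (meters)
    right_y: float  # vertical position of the right foot (meters)
    t: float        # timestamp (seconds)

class WalkInPlaceController:
    """Converts in-place stepping into forward avatar speed (hypothetical sketch)."""

    LIFT_THRESHOLD = 0.05  # meters a foot must rise above rest to count as lifted
    STEP_SPEED = 1.2       # forward speed (m/s) granted by each detected step
    DECAY = 2.0            # decay rate of speed between steps (1/s)

    def __init__(self):
        self.rest_left = None    # calibrated resting foot heights
        self.rest_right = None
        self.speed = 0.0
        self.last_t = None
        self._left_up = False
        self._right_up = False

    def update(self, s: FootSample) -> float:
        """Return the avatar's forward speed given the latest tracker sample."""
        if self.rest_left is None:
            # Use the first sample to calibrate resting foot heights.
            self.rest_left, self.rest_right = s.left_y, s.right_y
            self.last_t = s.t
            return 0.0

        dt = s.t - self.last_t
        self.last_t = s.t
        # Speed decays toward zero so motion stops when stepping stops.
        self.speed = max(0.0, self.speed * (1.0 - self.DECAY * dt))

        # Detect the falling edge of each foot lift as one completed step.
        left_up = s.left_y - self.rest_left > self.LIFT_THRESHOLD
        right_up = s.right_y - self.rest_right > self.LIFT_THRESHOLD
        if (self._left_up and not left_up) or (self._right_up and not right_up):
            self.speed = self.STEP_SPEED
        self._left_up, self._right_up = left_up, right_up
        return self.speed

# Example: three frames in which the left foot lifts and lands again.
wip = WalkInPlaceController()
for sample in [FootSample(0.0, 0.0, 0.00),
               FootSample(0.1, 0.0, 0.25),   # left foot lifted
               FootSample(0.0, 0.0, 0.50)]:  # left foot lands -> one step
    print(wip.update(sample))               # 0.0, 0.0, then 1.2

Comparable logic could sit behind any of the accessories above: only the source of the per-frame posture signal changes, not the step-detection scheme.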
While it is difficult to make specific claims about the impact of motion controls on
spatial knowledge, there are some generalities that can be drawn from these types of
interaction. First, incorporating one's body into the user interface of a desktop simulation
should improve spatial sensing and navigation insofar as the movements pertain
to the user's navigation. For example, leaning left to steer leftward or to initiate a
turn provides more accurate efferent information about leftward movement than,