of heading relative to objects in the scene. To compensate for this limitation, patients
employ an active scanning strategy in which they make a rapid sequence of fixations
between objects, the floor ahead, and other features of the layout (e.g., [58]). This
differs from normally sighted individuals, who tend to focus their gaze in the
direction of heading or toward the current goal. While an active scanning strategy
may improve the perception of heading with respect to a known object, its effect on
the detection of stationary and moving obstacles and the likelihood of collisions is
unknown. For this reason, different assessment and training interventions are needed
to understand the cost-benefit tradeoff of such a strategy and to develop new or
improved strategies for enhanced mobility safety.
Given the nature of VR as a safe testing and learning environment, a group of
researchers at the Schepens Eye Research Institute (Boston, MA) conducted a
pair of experiments with two specific objectives: to assess VR as a viable tool
for studying the mobility of patients with PFL, and to explore the viability of
studying visual-motor learning in surrogate patients by simulating PFL in
normally sighted participants [1, 33]. Apfelbaum et al. [1] examined the influence of different
approach angles to a virtual obstacle on participants' perceptual judgments of whether
their path would pass to the right or left of the obstacle. The experimental setup
consisted of a human-driven treadmill facing a projection screen displaying a passive
VR model of a local shopping mall (i.e., not coupled to participants' eye or head positions).
Patients with PFL (the mean field of view was 5.9° for the patient group)
and control participants with an artificially reduced field of view (matched to the
patient group) either passively viewed or actively walked while viewing the display
(in passive viewing patients remained standing as the virtual environment moved).
In this experiment all participants viewed the virtual environment monocularly while
they approached the obstacle at different heading angles (ranging from 4° to 20°, with
0° representing a straight-on approach). Both the control participants and the patients
with PFL were equally accurate in their judgments and made judgments at similar
distances from the obstacle. Additionally, patients' accuracy increased when they
approached the obstacle at small angles while walking, whereas the control participants
showed the opposite pattern. Both groups delayed their responses when walking until
they were closer to the virtual obstacle than in passive viewing, suggesting that a
walking-based VR interface may be important for evoking perceptually guided behavior
that generalizes to the real world [1]. We are currently collaborating
with the Schepens group to investigate the detection and avoidance of stationary and
moving obstacles by PFL patients during overground walking in immersive VR [25].
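The left/right judgment in this task reduces to simple ray geometry: a straight walking path at heading angle θ has a lateral offset of d·tan θ at an obstacle d metres ahead. The sketch below illustrates this relation; the collision margin and distances are illustrative assumptions, not values from the study.

```python
import math

def path_side(heading_deg, obstacle_dist_m, clearance_m=0.25):
    """Judge whether a straight path at `heading_deg` (positive = rightward,
    0 = straight on) passes right or left of an obstacle directly ahead at
    `obstacle_dist_m` metres.  `clearance_m` is an assumed collision margin."""
    lateral = obstacle_dist_m * math.tan(math.radians(heading_deg))
    if abs(lateral) < clearance_m:
        return "collision"
    return "right" if lateral > 0 else "left"

# At 5 m, a 4 deg heading yields ~0.35 m of lateral offset; 20 deg yields ~1.82 m,
# which hints at why shallow approach angles make the judgment harder.
for angle in (0, 4, 20):
    print(angle, round(5 * math.tan(math.radians(angle)), 2), path_side(angle, 5))
```

This also makes clear why both groups benefited from delaying their response while walking: the lateral offset d·tan θ shrinks with distance, so the same heading angle is easier to classify closer to the obstacle only in retinal terms, while the geometric offset itself is unambiguous at any depth.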
Luo et al. [33] continued this line of research while employing the Multiplexing
Vision Rehabilitation Device (cf. [41]). 6 Using the same experimental set-up as the
previous experiment, participants interacted with the virtual environment through
the device.
6 The Multiplexing Vision Rehabilitation Device is an augmented reality device in which the
user wears a see-through head-mounted display (HMD) with a 25° field of view to which a small
monochrome video camera has been attached. When wearing the device the user not only sees the
real world in full resolution, but also sees real-time edge detection from a field of view between 75°
and 100°, minified and displayed on the smaller field of view provided by the HMD [41].
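The multiplexing idea in the footnote, a wide-field edge map minified into the narrow HMD view, can be sketched roughly as follows. The gradient-based edge detector, its threshold, and the 90° wide field are illustrative assumptions; the device's actual processing is described in [41].

```python
import numpy as np

def multiplexed_overlay(frame, wide_fov_deg=90.0, hmd_fov_deg=25.0):
    """Edge-detect a wide-field camera frame and minify the edge map so
    that `wide_fov_deg` worth of scene fits the `hmd_fov_deg` display."""
    gy, gx = np.gradient(frame.astype(float))      # image gradients
    edges = np.hypot(gx, gy) > 10.0                # assumed edge threshold
    k = max(1, round(wide_fov_deg / hmd_fov_deg))  # minification factor
    return edges[::k, ::k]                         # subsampled edge map

frame = np.zeros((360, 360), dtype=np.uint8)
frame[100:260, 100:260] = 255      # a bright square standing in for an obstacle
mini = multiplexed_overlay(frame)  # 90 deg of scene squeezed into the 25 deg view
print(mini.shape)                  # (90, 90)
```

The key design point this sketch captures is the trade-off the device makes: the user keeps full-resolution central vision through the see-through HMD, while the minified edge map sacrifices spatial detail to restore awareness of a roughly 3-4x wider field.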