that the noise attributable to individual estimates is independent and Gaussian, then an estimate with the lowest variance is obtained using Maximum Likelihood Estimation (MLE) [35]. MLE models have three general characteristics. First, information from two or more sensory modalities is combined using a weighted average. Second, the corresponding weights are based on the relative reliabilities of the unisensory cues (i.e., the inverse of their variances); the cue with the lowest unimodal variance will be weighted highest when the cues are combined. Third, as a consequence of integration, the variance of the integrated estimate will be lower than that observed for either of the individual estimates.
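For the two-cue case, these three characteristics can be summarized compactly (the cue labels V and P used here are purely illustrative): given unimodal estimates \hat{S}_V and \hat{S}_P with variances \sigma_V^2 and \sigma_P^2, the MLE-combined estimate is

\hat{S}_{VP} = w_V \hat{S}_V + w_P \hat{S}_P, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_P^2}, \quad w_P = 1 - w_V,

and its predicted variance,

\sigma_{VP}^2 = \frac{\sigma_V^2 \, \sigma_P^2}{\sigma_V^2 + \sigma_P^2},

is never larger than the smaller of the two unimodal variances.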
There is now mounting evidence that humans combine information from across the senses in such a “statistically optimal” manner (e.g., [37]). Most of this work has been aimed at modeling cue integration between the exteroceptive senses such as vision, haptics, and hearing [2, 4, 12, 35, 36], or within the visuomotor system (e.g., [63, 66]), but very few studies have considered whether the same predictions apply to multisensory self-motion perception.
The Bayesian perspective is now just starting to be considered in the field of human locomotion (e.g., [25]), and self-motion in particular [18, 19, 21, 23, 39, 42]. For instance, a study by Campos et al. [23] highlights the dynamic manner in which optic flow and body-based cues are integrated during walking in the real world. The study shows that the notion of optic flow as an all-inclusive solution to self-motion perception [46] is too simplistic. In fact, when body-based cues (e.g., proprioceptive and vestibular inputs) are available during natural walking, they can dominate over visual inputs in dynamic spatial tasks that require the integration of information over space and time (see also [21] for supporting evidence in VR). Other studies have attempted to look at body-based cues in isolation and investigate how these individual sources interact with visual information. For instance, a number of studies have considered the integration of optic flow and vestibular information for different aspects of self-motion perception (e.g., [19, 39, 40, 51, 61]). Evidence from both humans ([18, 39]; see also [69]) and non-human primates [40, 49] shows that visual-vestibular integration is statistically optimal when making heading judgments. This is reflected by a reported reduction in variance during combined cue conditions, compared to the response patterns when either cue is available alone. Interestingly, when the visual signal lacks stereoscopic information, visual-vestibular integration may no longer be optimal for many observers [19]. To date, the work on visual-vestibular interactions has been the most advanced with respect to cue integration during self-motion, in the sense that it has allowed for careful quantitative predictions.
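To illustrate how such predictions are generated (using hypothetical numbers rather than values from the cited studies): if an observer's unimodal heading thresholds were, say, \sigma_{vis} = 4° and \sigma_{vest} = 3°, the MLE model would assign weights w_{vis} = (1/16)/(1/16 + 1/9) = 0.36 and w_{vest} = 0.64, and would predict a combined threshold of \sigma_{comb} = \sqrt{(16 \times 9)/(16 + 9)} = 2.4°, lower than either unimodal threshold; integration is deemed statistically optimal when the empirically measured bimodal threshold matches this prediction.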
Studies on the combinations of other modalities during self-motion perception have also started to provide qualitative evidence that supports the MLE model. For instance, Sun et al. [102] looked at the relative contributions of optic flow information and proprioceptive information to human performance on relative path length estimation (see also [103]). They found evidence for a weighted averaging of the two sources, but also that the availability of proprioceptive information increased the accuracy of relative path length estimation based on visual cues. These results are supported by a VR study [21] which demonstrated a higher influence of body-based cues (proprioceptive and vestibular) when estimating walked distances and a higher influence of visual cues during passive movement.