capture to allow for moving cameras. For example, Liebowitz and Carlsson [281]
extended metric reconstruction to the case where the scene points lie on a dynamic
articulated skeleton whose bone lengths are known. Hasler et al. [190] automated the
synchronization of the cameras and improved the feature detection and body model.
Brand and Hertzmann [65] discussed how a database of performances of the same
action (e.g., walking) by different performers could be used to separate the style of
a motion from its content and to estimate style parameters for each performer. This
approach enabled previously recorded activities to be rendered in the style of a dif-
ferent performer, as well as the generation of new styles not in the database. Liu
et al. [290] estimated biomechanical aspects of a performer's style (e.g., relative pref-
erences for using joints and muscles), assuming that the recorded motion capture
data optimizes a personal physics-based cost function.
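To make this inverse-optimization idea concrete, here is a minimal sketch (not Liu et al.'s actual formulation): if the personal cost function is modeled as a weighted sum of candidate terms, then the recorded motion being optimal implies that the weighted sum of the terms' gradients vanishes at that motion, and the weights can be recovered up to scale from this stationarity condition. The function name and array layout below are hypothetical.

```python
import numpy as np

def estimate_style_weights(term_gradients):
    """Recover cost weights w (up to scale) under the assumption that
    the recorded motion is a stationary point of the weighted cost
    C = sum_j w_j * c_j, i.e. sum_j w_j * grad(c_j) ~ 0.

    term_gradients: (num_terms, num_dofs) array; row j is the gradient
    of candidate term c_j (e.g., a joint-torque or muscle-effort
    penalty) evaluated at the recorded motion.
    """
    G = np.asarray(term_gradients, dtype=float)
    # The unit-norm w minimizing ||G.T @ w||^2 is the right singular
    # vector of G.T associated with the smallest singular value.
    _, _, vt = np.linalg.svd(G.T)
    w = vt[-1]
    # Fix the arbitrary sign so the weights are predominantly positive.
    return w if w.sum() >= 0 else -w
```

In practice the stationarity residuals would be stacked over many frames and the weights constrained to be nonnegative, but the least-squares structure stays the same.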
The more extensively component motions are edited, the greater the risk that the
synthesized motion will appear unnatural to human eyes. Ren et al. [387] designed a
classifier trained on a database of both natural and unnatural motions that could
predict whether a new motion had a natural appearance. The classifier is based on
a hierarchical decomposition of the body into limbs and joints, so that the source of
an unnatural motion can be automatically pinpointed. In addition to motion editing,
this approach could also be used to detect errors in raw motion capture data and to
determine markers and intervals that need to be fixed. Safonova and Hodgins [415]
proposed an algorithm that analyzes the physical correctness of a motion sequence,
taking into account linear and angular momentum and ground contact; this analysis
can improve the appearance of interpolated motions.
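As a concrete instance of these momentum conditions (a minimal sketch, not Safonova and Hodgins' algorithm), the function below checks ballistic consistency during flight phases: with no ground contact, linear momentum may change only under gravity, and angular momentum about the center of mass should be conserved. The segment-mass model, array layout, and function name are assumptions for illustration.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])  # y-up world frame, m/s^2

def momentum_violation(positions, masses, dt, in_flight):
    """Measure how badly a motion violates ballistic momentum laws.

    positions: (num_frames, num_segments, 3) trajectories of the body
        segments' centers of mass, sampled every dt seconds.
    masses:    (num_segments,) segment masses.
    in_flight: (num_frames,) bool, True where the character has no
        ground contact, so the momentum laws must hold.
    Returns the worst flight-phase errors in the rate of change of
    linear momentum (vs. gravity) and of angular momentum (vs. zero).
    """
    m = masses[:, None]
    vel = np.gradient(positions, dt, axis=0)           # finite differences
    P = (m * vel).sum(axis=1)                          # linear momentum
    com = (m * positions).sum(axis=1) / masses.sum()   # whole-body COM
    r = positions - com[:, None, :]
    L = (m * np.cross(r, vel)).sum(axis=1)             # angular momentum about COM

    dP = np.gradient(P, dt, axis=0)                    # should equal total weight
    dL = np.gradient(L, dt, axis=0)                    # should be ~0 in flight
    lin_err = np.linalg.norm(dP - masses.sum() * GRAVITY, axis=1)
    ang_err = np.linalg.norm(dL, axis=1)
    return (np.max(lin_err[in_flight], initial=0.0),
            np.max(ang_err[in_flight], initial=0.0))
```

Large residuals flag frames where an edit or interpolation has broken physical plausibility; a correction step can then adjust those frames toward momentum-consistent values.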
Cooper et al. [104] applied an adaptive learning algorithm to direct the sequence of
actions that a motion capture performer should execute in order to efficiently build
a good library of clips for motion editing and synthesis. Kim et al. [239] discussed the
extension of motion editing to enable multiple characters to interact in a task (e.g.,
carrying objects in a relay).
The main goal of full-body motion capture is to record the geometric aspects of a
performance. However, many applications of facial motion capture require not only
the recording of facial geometry but also high-resolution facial appearance, for
example, to make an entirely convincing digital double. Alexander et al. [11] give an
interesting overview of the evolution of photorealistic actors in feature films. They
used the Light Stage at the University of Southern California to acquire the detailed
facial geometry and reflectance of a performer, producing an incredibly lifelike facial
animation rig.
We have generally assumed that motion capture data is processed for production well
after it is acquired. However, in a live setting, we may need to use motion capture
data to drive an animated character in real time, which is sometimes called computer
puppetry. This process is now commonly used on motion capture stages to allow a
director to crudely visualize the mapping of an actor's performance onto an animated
character in real time, notably for movies like Avatar.
The main problem is delivering fast, reliable inverse kinematics results to drive
a rigged character at interactive rates. Shin et al. [443] described one such algo-
rithm, which makes instantaneous choices about which end-effector motions are
most important to preserve in the inverse kinematics, and leverages analytic solu-
tions for speed. Chai and Hodgins [85] showed how a performer wearing only a
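To give a flavor of why analytic inverse kinematics is attractive for real-time puppetry, the sketch below solves a planar two-link chain (e.g., a shoulder-elbow-wrist limb) in closed form, so the cost per frame is a handful of trigonometric operations rather than an iterative optimization. This is the standard textbook two-link solution, not Shin et al.'s full-body algorithm.

```python
import math

def two_link_ik(l1, l2, x, y):
    """Closed-form planar IK for a two-link chain with link lengths
    l1 and l2 and end-effector target (x, y) in the base joint's frame.
    Returns (base_angle, elbow_angle) in radians, choosing the
    elbow-down branch; unreachable targets are clamped to the boundary.
    """
    # Law of cosines gives the elbow angle directly.
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))          # clamp for unreachable targets
    theta2 = math.acos(c2)                # elbow flexion
    # The base angle aims the chain at the target, corrected for the bend.
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2
```

Because such solvers run in microseconds per limb, an algorithm can afford to re-solve every limb on every frame and still have time left to decide which end-effector constraints matter most.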