In the context of motion capture, Ganapathi et al. [162] fit a full-body skinned kinematic model to a stream of monocular depth images from a time-of-flight sensor in real time. Since the sensor observations are directly comparable to the model surface, the observation likelihood is relatively straightforward and is based on the noise model for the sensor. However, the errors of the proposed inference algorithm with respect to conventional motion capture markers were still fairly high.
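As a rough illustration of what a sensor-noise-based observation likelihood might look like, the sketch below scores a candidate pose by comparing the depth image rendered from the posed body model against the observed depth image under an independent per-pixel Gaussian noise assumption. The function name, the fixed noise level, and the assumption that the model has already been rendered to a depth map are illustrative choices, not details taken from [162].

```python
import numpy as np

def depth_log_likelihood(observed_depth, predicted_depth, sigma=0.02, valid_mask=None):
    """Hypothetical per-pixel Gaussian log-likelihood of an observed depth image
    given the depth image rendered from the current pose of the body model.

    observed_depth, predicted_depth : (H, W) depth maps in meters.
    sigma : assumed standard deviation of the sensor noise, in meters.
    valid_mask : optional boolean mask of pixels with valid depth readings.
    """
    if valid_mask is None:
        valid_mask = np.isfinite(observed_depth) & np.isfinite(predicted_depth)
    residual = observed_depth[valid_mask] - predicted_depth[valid_mask]
    # Independent Gaussian noise at every valid pixel.
    return (-0.5 * np.sum((residual / sigma) ** 2)
            - residual.size * np.log(sigma * np.sqrt(2.0 * np.pi)))
```

In a tracking loop, this score would be evaluated for many candidate poses (or its gradient followed), with the model re-rendered to a depth map at each step.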
A more familiar consumer technology is the Kinect sensor introduced by Microsoft in 2010 as a new game-controlling interface for the Xbox 360, which uses an infrared structured-light-based sensor to produce a stream of monocular depth images. As described by Shotton et al. [445], hundreds of thousands of training images (both motion-captured and synthetic) were used to build a finely tuned classifier that maps each depth image to a set of candidate joint locations. The offline learning process is incredibly computationally intensive, but the resulting online classifier is extremely fast and can be hard-coded into the device. The system is impressive for its ability to robustly succeed across a wide range of body types and environmental conditions in real time, though its goal is general pose estimation rather than highly accurate motion capture (the depth sensor only has an accuracy of a few centimeters).
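The per-pixel classifier in [445] is built from simple depth-comparison features evaluated by a forest of decision trees, with joint proposals then extracted from the per-pixel body-part predictions. The sketch below shows the flavor of such a feature; the function name, the offset parameterization, and the use of scikit-learn's RandomForestClassifier to stand in for the forest are illustrative assumptions rather than the actual Kinect implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stand-in for the decision forest

def depth_comparison_features(depth, pixels, offsets):
    """Features of the form f(x; u, v) = d(x + u/d(x)) - d(x + v/d(x)).

    Dividing the offsets by the depth at x makes the feature roughly invariant
    to the subject's distance from the camera. Assumes `pixels` lie on the
    foreground, where depth > 0, and background pixels hold a large constant.

    depth   : (H, W) depth image in meters.
    pixels  : (N, 2) integer (row, col) coordinates to featurize.
    offsets : (F, 2, 2) array of F offset pairs (u, v), in pixel-meters.
    """
    h, w = depth.shape
    d0 = depth[pixels[:, 0], pixels[:, 1]]
    features = np.empty((len(pixels), len(offsets)), dtype=np.float32)
    for j, (u, v) in enumerate(offsets):
        pu = np.clip((pixels + u / d0[:, None]).astype(int), 0, [h - 1, w - 1])
        pv = np.clip((pixels + v / d0[:, None]).astype(int), 0, [h - 1, w - 1])
        features[:, j] = depth[pu[:, 0], pu[:, 1]] - depth[pv[:, 0], pv[:, 1]]
    return features

# Hypothetical training step: each depth image comes with a per-pixel body-part
# label map (e.g., rendered from mocap-driven synthetic bodies); the forest maps
# features to body-part labels, and joint candidates are later proposed from the
# resulting label maps.
# forest = RandomForestClassifier(n_estimators=3, max_depth=20)
# forest.fit(training_features, training_labels)
```

At run time, the learned forest is evaluated independently (and in parallel) at every foreground pixel, which is what makes the online stage so fast.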
7.8 INDUSTRY PERSPECTIVES
Senior software engineers Nick Apostoloff and Geoff Wedig from Digital Domain in Venice, California, discuss the role of body and facial motion capture in visual effects. Digital Domain is particularly well known for creating photo-realistic digital doubles using facial motion capture, as in the movies The Curious Case of Benjamin Button and TRON: Legacy.
RJR: The popular perception is that motion capture directly drives the performance of
an animated character. Can you comment on how accurate this perception is?
Apostoloff: For early all-digital characters, like Gollum in the Lord of the Rings trilogy, mocap was used purely as reference material for the animators; it didn't directly drive the character at all. Today, we're getting much closer to applying mocap directly
to animation. No company ever shows you how far you actually get, but I think we're
at the point where, toward the end of a production, you get seventy to eighty percent
of the character motion from mocap. In some cases, the animator's just touching up data you get back from the mocap. In other cases, there's lots of art direction that happens after the actual shoot. It's common that, looking at the data after the capture session, the director's not happy with something the actor did, so they'll have to reanimate a lot of that. You might only use timing information from the mocap —
for example, making sure the jaw moves at the right time — but they're going to
animate a lot on top of that. There are certain things like eyes that they do from
scratch all the time, just because eyelines change whenever a person moves around
in a scene. We don't even bother capturing that here at the moment.
It also comes back to the complexity of the animation rig and the mapping from the motion capture data onto the animation controls. For TRON: Legacy, both the rig and the motion capture were modeled as two different linear systems, so that mapping