RJR: What kinds of problems arise when mapping mocap data onto an animated
character with a different body size and shape?
Apostoloff: A big problem is scale differences between the actors and characters. If
you have a constant scale change among all of your actors and the characters — say
you've got actors playing giants and you're going to scale all of them four times —
they usually behave well in the same environment. But say you have one actor who's
being scaled by a factor of 1.2 to make their character bigger and you have another
actor who's being scaled by a factor of 0.8 to make their character smaller. You can
record them together on the motion capture stage, for example approaching each
other and shaking hands. But when you put them into the virtual environment and
scale everything, they no longer meet up and connect at the same point. This is a
huge issue in mocap.
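As a rough numerical illustration of the mismatch Apostoloff describes, consider two actors who shake hands at the same point on the capture stage. The coordinates, root positions, and the use of NumPy below are illustrative assumptions, not details from the interview; only the 1.2 and 0.8 scale factors come from the text.

```python
import numpy as np

# Hypothetical setup: two actors shake hands at the same world-space
# point on the capture stage.
contact_stage = np.array([0.9, 1.4, 0.0])   # shared hand position, metres

# Each actor's skeleton root sits at a different spot on the stage.
root_a = np.array([0.0, 0.0, 0.0])
root_b = np.array([2.0, 0.0, 0.0])

# Hand positions expressed relative to each actor's own root.
hand_a_local = contact_stage - root_a
hand_b_local = contact_stage - root_b

# Retarget by uniformly scaling each performance about its own root,
# using the per-character scale factors mentioned above.
scale_a, scale_b = 1.2, 0.8
hand_a_scaled = root_a + scale_a * hand_a_local
hand_b_scaled = root_b + scale_b * hand_b_local

print(hand_a_scaled)                                   # [1.08 1.68 0.  ]
print(hand_b_scaled)                                   # [1.12 1.12 0.  ]
print(np.linalg.norm(hand_a_scaled - hand_b_scaled))   # ~0.56 m gap
```

Because each performance is scaled about its own character, the single contact point splits into two different points in the virtual set, which is exactly why the handshake no longer connects.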
In particular, you may record an actor on the motion capture stage walking
across the room and interacting with different props. If you scale an actor's mocap
data down to make a smaller character, they don't actually make it to the other
side of the room! You have to introduce extra gait or strides somewhere for them
to make it across. That's a fairly big problem and the process for fixing it is
usually quite manual. Often you capture a lot of generic background motion of
actors so you can then insert these bits into scenes to fix them. I don't think people often use things like automatic gait generation to fix these kinds of issues,
since that involves a different simulation pipeline that's difficult to insert into a
production.
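The stride bookkeeping behind the walk-across-the-room example can be sketched in a few lines. The room length, stride length, and the reuse of the 0.8 scale factor below are made-up numbers chosen only to show why extra gait cycles have to be spliced in.

```python
import math

# Hypothetical numbers, not from the interview: a captured walk crosses
# a 10 m stage in strides of roughly 1.25 m each.
room_length_m = 10.0
stride_length_m = 1.25
captured_strides = room_length_m / stride_length_m          # 8 strides

# Scaling the character (and hence its stride) down by 0.8 shortens
# each step, so the scaled performance falls short of the far wall.
character_scale = 0.8
scaled_stride_m = stride_length_m * character_scale          # 1.0 m
distance_covered_m = captured_strides * scaled_stride_m      # 8.0 m
shortfall_m = room_length_m - distance_covered_m             # 2.0 m

# Extra gait cycles an animator (or a motion-splicing tool) would have
# to insert so the character still reaches the other side of the room.
extra_strides = math.ceil(shortfall_m / scaled_stride_m)     # 2 more strides
print(distance_covered_m, shortfall_m, extra_strides)
```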
RJR: Can you describe your system for facial motion capture?
Wedig: We have a head-mounted camera system that uses four grayscale cameras
on carbon fiber rods that sit around the jaw line. You get very good pictures of the
mouth and jaw area. If we were to put the cameras higher up, they would interact
with the actor's eye line, which some actors find very distracting. By putting the
cameras in a flat sort of curve around the bottom of the head we can be sure that
any part of the face is seen by at least two cameras. We use the images from the four
cameras to individually track dots on the actor's face, which are placed in a specific
pseudo-random pattern developed over several years. Initially the dots are applied
by hand, and afterward we use a plastic mask with holes drilled in it to try to get some
consistency across shots. We can't put the dots very close to the actor's lips because
they get messed up when they eat or wipe their face.
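The interview does not spell out how the dot positions are reconstructed from the four views, but the requirement that every dot be seen by at least two cameras suggests standard two-view triangulation. The sketch below uses linear (DLT) triangulation with toy projection matrices as an assumed, simplified stand-in for whatever the production pipeline actually does.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker seen by two calibrated
    cameras. P1, P2 are 3x4 projection matrices; uv1, uv2 are the pixel
    coordinates of the tracked dot in each view."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: last right singular vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy check with two synthetic cameras looking at a point near the jaw.
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [0.0]])])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # 10 cm baseline
X_true = np.array([0.02, -0.05, 0.3, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))   # ~ [0.02, -0.05, 0.3]
```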
Matching the marker set on a given shot with our canonical marker set is still a
problem we have to write a lot of code to address. Another big issue is stabilization.
No matter how good the carbon fiber rods are, they're going to bounce. They bounce
when the actor breathes, when they walk, or when they move their head. Filtering the
unwanted motion is very difficult.
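Wedig doesn't say how the bounce is removed; one common approach (an assumption here, not a description of their pipeline) is to estimate a per-frame rigid transform from markers that should stay fixed relative to the skull and use it to re-express every dot in a stabilized head frame, for example with a Kabsch/Procrustes fit:

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch/Procrustes: best-fit rotation R and translation t mapping
    the Nx3 point set `src` onto `dst` (e.g. one frame's nominally rigid
    markers onto their positions in a reference frame)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def stabilize_frame(markers, stable_idx, reference):
    """Reduce helmet bounce in one frame of tracked dots: align markers
    assumed rigid with the skull (e.g. near the temples) to their
    reference-frame positions, then apply that same rigid transform to
    every dot in the frame."""
    R, t = rigid_align(markers[stable_idx], reference[stable_idx])
    return markers @ R.T + t
```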
Finally, we map the motion of the dots onto the actor-specific face model created
from the training data described earlier.
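The mapping from stabilized dots to the face model is likewise left unspecified in the interview. A minimal sketch of one common approach, solving each frame for non-negative blendshape weights that best reproduce the dot positions, might look like the following; the function and its inputs are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def solve_face_frame(neutral, blendshape_deltas, dots):
    """One possible (assumed) way to drive an actor-specific face rig
    from stabilized dots: find non-negative blendshape weights w
    minimizing || neutral + B w - dots ||^2.

    neutral:            (3M,) stacked marker positions of the neutral face
    blendshape_deltas:  (3M, K) per-shape marker displacements from neutral
    dots:               (3M,) stabilized marker positions for this frame
    """
    weights, _ = nnls(blendshape_deltas, dots - neutral)
    return weights
```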