Similar approaches to creating and applying motion graphs were described by Lee
et al. [ 265 ] and Arikan and Forsyth [ 17 ].
We can also impose constraints based on motion type, for example, forcing the
character to run through a given region by only allowing samples from running
motion capture sequences. For this purpose, it may be useful to automatically cluster
and annotate a large database of motion capture sequences with descriptions of the
performance (e.g., see [ 18 , 253 ]). Finally, in addition to space-time constraints, we
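As an illustrative sketch (not taken from the cited systems), a motion-type constraint can be enforced by filtering which annotated clips the synthesis step is allowed to sample; the clip names, labels, and region test below are all assumptions:

```python
# Hypothetical annotated motion database; names and labels are illustrative.
database = [
    {"name": "walk_01", "label": "walk"},
    {"name": "run_03",  "label": "run"},
    {"name": "run_07",  "label": "run"},
    {"name": "jump_02", "label": "jump"},
]

def in_run_region(position):
    """Illustrative region test: an axis-aligned box in the ground plane."""
    x, z = position
    return 0.0 <= x <= 5.0 and 0.0 <= z <= 5.0

def candidate_clips(position):
    """Restrict motion-graph samples to running clips inside the region;
    elsewhere, any motion type remains allowed."""
    if in_run_region(position):
        return [c for c in database if c["label"] == "run"]
    return database
```

In a full system the same filter would be applied at each transition of the motion graph, so that every sampled path through the constrained region is assembled only from running sequences.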
can impose a dynamics-based model for F(W), such as evaluating the total power
consumption of a character's muscles [ 371 ] or the realism of the recovery from a
sharp impact [ 583 ].
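One common proxy for total muscle power consumption (a sketch under assumed conventions, not necessarily the exact measure used in [ 371 ]) sums the absolute mechanical power, torque times angular velocity, over all joints:

```python
def total_power(joint_torques, joint_velocities):
    """Total mechanical power: sum over joints of |tau_i * omega_i|.
    Absolute values are taken so that both positive (concentric) and
    negative (eccentric) work count as muscular effort."""
    return sum(abs(t * w) for t, w in zip(joint_torques, joint_velocities))
```

Minimizing such a term over candidate motions favors efficient, physically plausible movement.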
7.6 FACIAL MOTION CAPTURE
Marker-based motion capture is primarily used to record the full body of a performer.
However, it can also be used to focus on a performer's face, for later use in driving the
expressions of an animated character. The technology and methods for marker acquisition
are basically the same as for full-body motion capture, except that the cameras
are closer to the subject and the markers are smaller (i.e., 2-5 mm in diameter).
Self-occlusions and marker loss are also less problematic since the facial performance is
generally captured head-on by a smaller set of inward-facing cameras. Figure 7.19
illustrates a typical facial motion capture setup.
Facial markers aren't usually related to an underlying skeletal model as in full-
body motion capture. Instead, facial markers are commonly related to a taxonomy
of expressions called the Facial Action Coding System (FACS), developed by Ekman
Figure 7.19. A sample facial motion capture setup. (a) The camera configuration. (b) The marker
configuration.
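To relate markers to FACS in practice, one simple approach (an assumed, illustrative technique, not a method prescribed by the text) solves for action-unit activation weights whose combined marker displacements best match the observed ones; the basis matrix and numbers below are hypothetical:

```python
import numpy as np

# Hypothetical AU basis: each column holds the marker displacements produced
# by fully activating one action unit. Rows are stacked marker coordinates.
au_basis = np.array([
    [0.0, 1.0],
    [2.0, 0.0],
    [0.0, 0.5],
])  # 3 marker coordinates x 2 action units

def solve_au_weights(displacements):
    """Least-squares fit of action-unit weights to observed marker
    displacements, clamped to [0, 1] since activations are non-negative."""
    w, *_ = np.linalg.lstsq(au_basis, displacements, rcond=None)
    return np.clip(w, 0.0, 1.0)

# Synthetic observation: 70% of AU 0 plus 30% of AU 1.
obs = 0.7 * au_basis[:, 0] + 0.3 * au_basis[:, 1]
weights = solve_au_weights(obs)
```

The recovered weights can then drive the corresponding action units, or blendshapes, of an animated face rig.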