we only need to capture face data containing mainly forehead motion, and learn
parts-based MUs from that data. In face animation, one often wants to animate a
local region separately. This is easily achieved by adjusting the MUPs of the
corresponding parts-based MUs independently. In face tracking, such as the system
described in Chapter 4, parts-based MUs can be used to track only the region of
interest (e.g., the lips). Furthermore, tracking with parts-based MUs is more
robust because local errors do not propagate to distant regions.
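To make the region-wise control concrete, the sketch below shows one way parts-based MUs might be combined: each region carries its own MUs and MUPs, and editing one region's MUPs leaves the others untouched. The array shapes, region indices, and helper function are illustrative assumptions, not the book's actual data structures.

import numpy as np

def animate_parts(neutral, regions):
    """Deform a neutral face mesh by driving each facial region with its
    own parts-based MUs and MUPs (a hypothetical layout, for illustration).

    neutral: (n, 3) array of neutral vertex positions.
    regions: list of (vertex_idx, mus, mups) tuples, where
             vertex_idx: (m,) indices of the region's vertices,
             mus:        (k, m, 3) parts-based Motion Units of the region,
             mups:       (k,) Motion Unit Parameters (coefficients).
    """
    deformed = neutral.copy()
    for vertex_idx, mus, mups in regions:
        # Each region's displacement is a linear combination of its MUs,
        # so a local tracking error stays confined to that region.
        displacement = np.tensordot(mups, mus, axes=1)  # (m, 3)
        deformed[vertex_idx] += displacement
    return deformed

# Example: open the lips while the forehead stays neutral.
neutral = np.zeros((1000, 3))
lip_idx, brow_idx = np.arange(0, 40), np.arange(900, 930)
lip_mus = np.random.randn(5, 40, 3) * 0.01   # 5 placeholder lip MUs
brow_mus = np.random.randn(3, 30, 3) * 0.01  # 3 placeholder forehead MUs
frame = animate_parts(neutral, [
    (lip_idx, lip_mus, np.array([0.8, 0.1, 0.0, 0.0, 0.0])),
    (brow_idx, brow_mus, np.zeros(3)),       # forehead MUPs all zero
])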
5. Animate Arbitrary Mesh Using MUs
The learned MUs are based on the motion capture data of particular subjects. To
use the MUs for other people, they need to be fitted to the new face geometry.
Moreover, the MUs sample the facial surface motion only at the marker positions;
the motion at all other points must be interpolated. We call this process MU
fitting.
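One standard way to handle the interpolation step is scattered-data interpolation with radial basis functions: solve for per-marker weights that reproduce an MU's marker displacements exactly, then evaluate the interpolant at every mesh vertex. The sketch below is a minimal version under that assumption; the Gaussian kernel and the sigma parameter are illustrative choices, not necessarily the fitting procedure used here.

import numpy as np

def interpolate_mu(mu_disp, markers, vertices, sigma=0.05):
    """Extend an MU's (m, 3) marker displacements to all (n, 3) mesh
    vertices with Gaussian RBF interpolation (a sketch, not the book's
    exact method).

    mu_disp:  (m, 3) displacements of one MU at the marker positions.
    markers:  (m, 3) marker positions on the new face geometry.
    vertices: (n, 3) all vertex positions of the new face mesh.
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    # Solve for weights so the interpolant reproduces mu_disp exactly at
    # the markers; a small ridge keeps the kernel matrix well conditioned.
    K = kernel(markers, markers) + 1e-9 * np.eye(len(markers))
    w = np.linalg.solve(K, mu_disp)              # (m, 3) RBF weights
    return kernel(vertices, markers) @ w         # (n, 3) dense displacement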
In our framework, we use the face models generated by “iFACE” for MU-based
face animation. “iFACE” is a face modeling and animation system developed in
[Hong et al., 2001a]. The generic face model in iFACE is shown in
Figure 3.7(a). Figure 3.7(b) shows a personalized model, which we customize
based on the Cyberware™ scanner data for that person. Figure 3.7(c) shows
the feature points we define on the iFACE generic model, which we use for MU
fitting.
Figure 3.7. (a): The generic model in iFACE. (b): A personalized face model based on the
Cyberware™ scanner data. (c): The feature points defined on the generic model.
In our previous work [Hong et al., 2002], we used MUs to animate models
generated by iFACE. We dealt with the MU fitting problem by constructing a
mapping between the MUs and the face deformation model of iFACE. This
technique allowed a key-frame-based face animation system to use MUs. First we