selected a set of training facial shapes with known MUPs. In a key-frame based
animation system, these training shapes can be represented by linear combi-
nations of key frames. Based on the equality between the two representations
of the training shapes, the conversion between parameters of key-frame com-
bination and MUPs could be derived as described in [Hong et al., 2002]. This
method enabled us to use MUs for animation in a traditional key-frame-based
animation system, such as iFACE. However, key frames of a certain system
may not be expressive enough to take advantage of the motion details in MUs.
Thus, the facial deformation information can be lost during conversion between
parameters of key-frame combination and MUPs. Alternatively, interpolation-
based techniques for re-targeting animation to new models, such as [Noh and
Neumann, 2001], could be used for MU fitting. In a similar spirit to [Noh and
Neumann, 2001], we design our MU fitting as a two-step process: (1) face-
geometry-based MU adjustment; and (2) MU re-sampling. These two steps can be
improved in a systematic way if enough MU sets are collected. For example, if
MU statistics over a large set of different face geometries are available, one can
systematically derive the geometry-to-MU mapping using machine-learning
techniques. On the other hand, if multiple MU sets are available that sample
different positions of the same face, it is possible to combine them to increase
the spatial resolution of MUs, because the markers in an MU set are usually
sparser than the vertices of the face geometry mesh.
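To make the conversion idea concrete, the sketch below shows one way the
equality between the two representations can be exploited by least squares,
assuming training shapes are stacked as column vectors; the matrix names
(K, M, s0) and function names are hypothetical and not taken from
[Hong et al., 2002].

```python
import numpy as np

# Hypothetical inputs (illustrative only):
#   K : (3n, k) key-frame shapes of the animation system, one per column
#   M : (3n, u) learned motion units (MUs), one per column
#   s0: (3n,)   neutral face shape
# A shape with MUPs p is s0 + M @ p; the same shape expressed as a
# key-frame combination is K @ c.  Equating the two representations
# yields a linear map from MUPs p to key-frame coefficients c.

def mup_to_keyframe_map(K, M, s0):
    K_pinv = np.linalg.pinv(K)   # least-squares inverse of the key frames
    A = K_pinv @ M               # maps MUPs to key-frame coefficients
    b = K_pinv @ s0              # constant offset from the neutral shape
    return A, b

def mups_to_keyframe_coeffs(p, A, b):
    # c minimizes ||K @ c - (s0 + M @ p)||; any motion detail outside
    # the span of the key frames is discarded by this projection.
    return A @ p + b
```

Because the pseudo-inverse projects onto the span of the key frames, motion
detail outside that span is discarded, which is precisely the information
loss noted above.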
The first step adjusts MUs to a face model with different geometry. The fun-
damental problem is to find a mapping from face geometry to MUs. Currently,
no data are available on MU statistics over different face geometries. We
assume that the corresponding positions of the two faces have the same motion
characteristics. Then, the adjustment is done by moving the markers of the
learned MUs to their corresponding positions on the new face. We interactively
build the correspondence of facial feature points shown in Figure 3.7(c) by la-
belling them via a GUI. Then, an image-warping technique is used to interpolate
the correspondence over the remaining part of the face. Note that the correspondences are
based on only 2D facial feature locations, because only one image of a face is
used in the GUI. We are working on using automatic facial feature localization
techniques (e.g. [Hu et al., 2004]) to automate this step.
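As a rough illustration of this first step, the sketch below uses a
thin-plate-spline warp (via SciPy's RBFInterpolator) as a stand-in for the
image-warping technique, which the text does not name; all array and function
names are assumptions.

```python
from scipy.interpolate import RBFInterpolator

# Illustrative names only:
#   src_feats: (f, 2) feature points on the face the MUs were learned from
#   dst_feats: (f, 2) corresponding points labelled on the new face via the GUI
#   markers  : (m, 2) 2D positions of the MU markers on the original face

def adjust_markers(src_feats, dst_feats, markers):
    # The warp interpolates the sparse feature correspondence over the
    # rest of the face; each marker is then moved to its corresponding
    # position on the new face.
    warp = RBFInterpolator(src_feats, dst_feats, kernel='thin_plate_spline')
    return warp(markers)
```

As in the text, the correspondence here is purely 2D, since only one image
of the new face is assumed.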
The second step is to derive movements of facial surface points that are not
sampled by markers in MUs. This is essentially a signal re-sampling problem,
for which an interpolation-based method is usually used. We use the popular
radial basis function interpolation. The family of radial basis functions (RBFs)
is well known for its powerful interpolation capability. RBF is widely used
in face model fitting [Pighin et al., 1998] and face animation [Guenter et al.,
1998, Marschner et al., 2000, Noh and Neumann, 2001]. Using RBF, the
movements of the surface points that are not sampled by markers can be
interpolated from the movements of the markers.
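A minimal re-sampling sketch follows, assuming a Gaussian kernel; the text
does not specify the basis function used, and all names below are
illustrative.

```python
import numpy as np

# Illustrative names; the Gaussian kernel and its width are assumptions.
#   markers : (m, 3) 3D marker positions sampled by the MU
#   disp    : (m, 3) marker displacements for one MU (or one MUP setting)
#   vertices: (v, 3) face-mesh vertex positions to re-sample

def rbf_resample(markers, disp, vertices, sigma=1.0):
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian radial basis

    # Solve for weights so the interpolant reproduces the marker motions.
    W = np.linalg.solve(kernel(markers, markers), disp)   # (m, 3)
    # Evaluate the interpolant at every mesh vertex.
    return kernel(vertices, markers) @ W                  # (v, 3)
```

In practice the kernel width and a possible low-order polynomial term matter
for stability; the sketch keeps only the core solve-then-evaluate structure.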