can be derived (see details in Tao (1998)). The system estimates ΔV̂_d using template-matching-based optical flow. The linear system is solved using the least-squares method. A multi-resolution framework is used for efficiency and robustness.
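As an illustration of this step, the sketch below estimates the inter-frame displacements of a set of feature points by coarse-to-fine template matching and then solves the linear system in the least-squares sense. It is a minimal sketch, not the original implementation: the pyramid depth, patch and search sizes, and the generic Jacobian J standing in for the actual linear system of Tao (1998) are all assumptions.

import numpy as np

def match_patch(prev, curr, x, y, dx0=0, dy0=0, patch=7, search=5):
    # Template from the previous frame around (x, y); exhaustive SSD search
    # in the current frame around the prior displacement (dx0, dy0).
    # Assumes the feature point lies away from the image border.
    h = patch // 2
    tmpl = prev[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best, best_d = np.inf, (dx0, dy0)
    for dy in range(dy0 - search, dy0 + search + 1):
        for dx in range(dx0 - search, dx0 + search + 1):
            cand = curr[y + dy - h:y + dy + h + 1,
                        x + dx - h:x + dx + h + 1].astype(float)
            if cand.shape != tmpl.shape:          # window fell outside the image
                continue
            ssd = float(np.sum((cand - tmpl) ** 2))
            if ssd < best:
                best, best_d = ssd, (dx, dy)
    return best_d

def estimate_flow(prev, curr, points, levels=3):
    # Multi-resolution (coarse-to-fine) estimation: displacements found at a
    # coarse level seed the search window at the next finer level.
    d = np.zeros((len(points), 2), dtype=int)
    for lvl in range(levels - 1, -1, -1):
        s = 2 ** lvl
        p_img, c_img = prev[::s, ::s], curr[::s, ::s]   # naive subsampling
        for i, (x, y) in enumerate(points):
            dx, dy = match_patch(p_img, c_img, x // s, y // s,
                                 int(d[i, 0]) // s, int(d[i, 1]) // s)
            d[i] = (dx * s, dy * s)
    return d

def solve_motion(J, d):
    # Least-squares solution of J @ p ~= d, where p stacks the motion
    # parameters (rigid increment and MU coefficients); J is a placeholder
    # for the actual linear system derived in Tao (1998).
    p, *_ = np.linalg.lstsq(J, d.reshape(-1).astype(float), rcond=None)
    return p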
In the original system, L is manually designed using a Bezier volume and represented by the displacements of the vertices of the face surface mesh. To derive L from the learned MUs, the "MU adaptation" process described earlier is used. In the current system, we use the holistic MUs. Parts-based MUs could be used if a certain local region is the focus of interest, such as the lips in lip-reading.
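The adaptation step itself is covered earlier; the sketch below only shows how its output might be packaged for the tracker, under the assumption that each adapted MU is a displacement vector of length 3 × n_vertices over the tracked mesh. The vectors are stacked as columns of the basis matrix L, and a parts-based variant simply keeps the rows belonging to a region of interest such as the lips. The function names and the region indexing are illustrative, not from the original system.

import numpy as np

def build_deformation_basis(mu_vectors):
    # Stack the adapted MU displacement vectors (each of length 3 * n_vertices)
    # as columns of the deformation basis L, so that L @ c gives the flattened
    # per-vertex displacement for MU coefficients c.
    return np.column_stack(mu_vectors)            # shape (3 * n_vertices, n_MUs)

def restrict_to_region(L, vertex_ids):
    # Parts-based variant: keep only the x, y, z rows of the vertices in the
    # region of interest (e.g., the lip vertices for lip-reading).
    rows = np.concatenate([(3 * v, 3 * v + 1, 3 * v + 2) for v in vertex_ids])
    return L[rows, :]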
The system is implemented to run on a 2.2 GHz Pentium 4 processor with 2 GB of memory. The image size of the input video is 640 × 480. The system works at 14 Hz for non-rigid face tracking. The tracking results, i.e., the coefficients of the MUs, R, and T, can be directly used to animate face models. Figure 8 shows some typical frames that were tracked, along with the animated face models.
Figure 8. Typical tracked frames and corresponding animated face models. (a) The input frames; (b) the tracking results visualized by the yellow mesh; (c) the front views of the face model animated using the tracking results; (d) the side views of the face model animated using the tracking results. In each row, the first image corresponds to the neutral face.
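The per-frame tracking output (the MU coefficients together with R and T) can be applied to the face model as sketched below. This assumes the common formulation in which the non-rigid deformation L @ c is added to the neutral mesh before the rigid motion; the function names and the (c, R, T) frame format are illustrative rather than taken from the original system.

import numpy as np

def animate_frame(v0, L, c, R, T):
    # v0: (n_vertices, 3) neutral face mesh
    # L : (3 * n_vertices, n_MUs) deformation basis from the adapted MUs
    # c : (n_MUs,) MU coefficients; R: (3, 3) rotation; T: (3,) translation
    deformed = v0 + (L @ c).reshape(-1, 3)   # non-rigid deformation
    return deformed @ R.T + T                # rigid head motion

def replay(v0, L, frames):
    # Replay a tracked sequence: frames is a list of (c, R, T) tuples
    # produced per video frame by the tracker.
    return [animate_frame(v0, L, c, R, T) for c, R, T in frames]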