them in facial motion analysis. In a facial expression experiment using CMU
Cohn-Kanade database [Kanade et al., 2000], we show that the novel appearance features can deal with motion details in a less illumination-dependent and person-dependent way [Wen and Huang, 2003]. In face synthesis, the flexible appearance model enables us to transfer motion details and lighting effects
from one person to another [Wen et al., 2003]. Therefore, the appearance model
constructed under one set of conditions can be extended to others. Synthesis examples demonstrate the effectiveness of the approach.
2.5 Applications of the face processing framework
3D face processing techniques have many applications, ranging from intelligent human-computer interaction to smart video surveillance. In this book, besides face processing techniques, we will discuss applications of our 3D face processing framework to demonstrate its effectiveness.
The first application is model-based very low bit-rate face video coding.
Nowadays the Internet has become an important part of people's daily lives. In today's highly heterogeneous network environments, available bandwidth varies widely, and providing good video quality at very low bit rates is an important yet challenging problem. One alternative to traditional
waveform-based video coding techniques is the model-based coding approach.
In the emerging MPEG-4 (Moving Picture Experts Group 4) standard, a model-based coding standard has been established for face video. The idea is to create
a 3D face model and encode the variations of the video as parameters of the 3D
model. Initially the sender sends the model to the receiver. After that, the sender
extracts the motion parameters of the face model in the incoming face video.
These motion parameters can be transmitted to the receiver at a very low bit rate. The receiver can then synthesize the corresponding face animation using
the motion parameters. However, in most existing approaches following the
MPEG-4 face animation standard, the residual is not sent, so the synthesized face image can be very different from the original image. In this book, we propose a hybrid approach to solve this problem. On the one hand, we use our 3D
face tracking to extract motion parameters for model-based video coding. On
the other hand, we use the waveform-based video coder to encode the residual
and background. In this way, the difference between the reconstructed frame
and the original frame is bounded and can be controlled. The experimental
results show that our hybrid approach delivers better performance at very low bit rates than a state-of-the-art waveform-based video codec.
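The hybrid scheme above can be illustrated with a minimal sketch. Everything here is a toy stand-in, not the authors' implementation: the `synthesize` function is a placeholder for rendering a frame from the 3D model's motion parameters, and the waveform coder is reduced to uniform quantization of the residual with step `q`, which is what bounds the reconstruction error.

```python
import numpy as np

# Stand-in for the transmitted 3D face model (a flat "base face" image).
BASE = np.full((4, 4), 128.0)

def synthesize(params):
    """Placeholder for model-based rendering: a frame predicted from
    low-bit-rate motion parameters. Here it just shifts the base image."""
    return np.clip(BASE + params["offset"], 0.0, 255.0)

def encode_residual(residual, q=8.0):
    """Toy 'waveform coder': uniform quantization of the residual."""
    return np.round(residual / q).astype(int)

def decode_residual(codes, q=8.0):
    return codes * q

# --- Sender side ---
original = BASE + 10.0 + np.arange(16).reshape(4, 4)  # incoming face frame
params = {"offset": 10.0}                             # extracted motion parameters
model_frame = synthesize(params)                      # model-based prediction
codes = encode_residual(original - model_frame)       # residual sent along with params

# --- Receiver side ---
reconstructed = synthesize(params) + decode_residual(codes)

# Unlike pure model-based coding (residual dropped entirely), the error
# here is bounded by half the quantization step q/2 = 4.0 per pixel.
assert np.max(np.abs(reconstructed - original)) <= 4.0
```

The point of the sketch is the error bound in the last line: because the residual is quantized and transmitted rather than discarded, the per-pixel difference between the reconstructed and original frames is controlled by the coder's step size, matching the claim that the difference "is bounded and can be controlled."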
The second application is the use of face processing techniques in an integrated human-computer interaction environment. In this project, the goal is to contribute to the development of a human-computer interaction environment in which the computer detects and tracks the user's emotional, motivational, cognitive, and task states, and initiates communications based on this knowledge,