movements. Sometimes, hand postures act as special transition states in temporal gestures and supply a cue to segment and recognize temporal hand gestures.
In certain applications, continuous gesture recognition is required and, as a result,
the temporal aspect of gestures must be investigated. Some temporal gestures
are specific or simple and can be captured by low-detail dynamic models. However, many high-detail activities must be represented by more complex gesture semantics, so modeling only the low-level dynamics is insufficient. The HMM
(Hidden Markov Model) technique (Bregler, 1997) and its variations (Darrell &
Pentland, 1996) are often employed in modeling, learning, and recognition of
temporal signals. Because many temporal gestures involve motion trajectories
and hand postures, they are more complex than speech signals. Finding a suitable
approach to model hand gestures is still an open research problem.
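As a minimal illustration of the HMM technique mentioned above, the following Python sketch classifies a quantized hand-trajectory sequence by evaluating its likelihood under per-gesture discrete HMMs with the forward algorithm. The gesture names, the toy model parameters and the three-symbol motion codebook are assumptions made for this example, not values from the cited works.

```python
# Minimal sketch: HMM-based temporal gesture classification, assuming each
# gesture class has already been trained as a discrete HMM (pi, A, B) over a
# small codebook of quantized hand-motion observations.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward algorithm.
    obs : sequence of observation-symbol indices
    pi  : (N,) initial state probabilities
    A   : (N, N) transition matrix, A[i, j] = P(state j | state i)
    B   : (N, M) emission matrix, B[i, k] = P(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]
    log_lik = 0.0
    for t in range(1, len(obs)):
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha = (alpha / scale) @ A * B[:, obs[t]]
    return log_lik + np.log(alpha.sum())

def classify_gesture(obs, models):
    """Pick the gesture whose HMM assigns the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Two toy 2-state HMMs over a 3-symbol codebook (e.g. quantized motion directions).
models = {
    "wave": (np.array([0.6, 0.4]),
             np.array([[0.7, 0.3], [0.4, 0.6]]),
             np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])),
    "point": (np.array([0.9, 0.1]),
              np.array([[0.9, 0.1], [0.2, 0.8]]),
              np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])),
}

observed = [0, 1, 0, 1, 2, 2]          # a quantized hand-trajectory sequence
print(classify_gesture(observed, models))
```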
Facial Expression Analysis
Facial Features Relevant to Expression Analysis
Facial analysis includes a number of processing steps that attempt to detect or track the face, to locate characteristic facial regions such as the eyes, mouth and nose, to extract and follow the movement of facial features, such as characteristic points in these regions, or to model facial gestures using anatomic information about the face.
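To make these steps concrete, the sketch below strings them together using OpenCV (our choice for illustration, not a tool prescribed here): a Haar cascade detects the face and coarse eye regions in the first frame, characteristic points are extracted inside the facial area, and their movement is followed across the sequence with Lucas-Kanade optical flow. The input file name is a placeholder.

```python
# Rough illustration of the face-analysis pipeline described above.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("face_sequence.avi")   # hypothetical input video
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Step 1: detect the face and characteristic regions (eyes) in the first frame.
(x, y, w, h) = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)[0]
eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])

# Step 2: extract characteristic points inside the facial area.
mask = np.zeros_like(gray)
mask[y:y + h, x:x + w] = 255
points = cv2.goodFeaturesToTrack(gray, maxCorners=30, qualityLevel=0.01,
                                 minDistance=7, mask=mask)

# Step 3: follow the movement of those points over the sequence.
while True:
    ok, frame = cap.read()
    if not ok:
        break
    next_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points, status, _ = cv2.calcOpticalFlowPyrLK(gray, next_gray, points, None)
    points = points[status.flatten() == 1].reshape(-1, 1, 2)
    gray = next_gray
```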
Although FAPs provide all the necessary elements for MPEG-4 compatible
animation, they cannot be directly used for the analysis of expressions from video
sequences, due to the absence of a clear quantitative definition framework. In
order to measure FAPs in real image sequences, we have to define a mapping
between them and the movement of specific FDP feature points (FPs), which
correspond to salient points on the human face.
Table 1 provides the quantitative modeling of FAPs that we have implemented using the features labeled as f_i (i = 1..15) (Karpouzis, Tsapatsoulis & Kollias, 2000). This feature set employs feature points that lie in the facial area and can be automatically detected and tracked. It consists of distances, denoted as s(x,y), between protuberant points x and y, corresponding to the Feature Points shown in Figure 2. Some of these points remain constant during expressions and can be used as reference points; distances between these reference points are used for normalization purposes (Raouzaiou, Tsapatsoulis, Karpouzis & Kollias, 2002).
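The following sketch shows how such a feature can be computed once the Feature Points have been tracked: a distance s(x,y) between two salient points is divided by a reference distance between points that stay fixed during expressions, so the measurement becomes independent of face size and camera distance. The specific point pairs used here (lip mid-points normalized by the inner eye-corner separation) are illustrative assumptions; the actual pairs are those of Table 1 and Figure 2.

```python
# Minimal sketch of a normalized distance feature between tracked Feature Points.
import numpy as np

def s(p, q):
    """Euclidean distance between two 2-D feature points."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def normalized_feature(p, q, ref_a, ref_b):
    """Distance s(p, q) expressed in units of the reference distance
    s(ref_a, ref_b) between expression-invariant points, making the
    feature independent of face size and camera distance."""
    return s(p, q) / s(ref_a, ref_b)

# Hypothetical tracked points (pixel coordinates).
inner_left_eye, inner_right_eye = (140, 120), (180, 120)   # stable reference points
upper_lip, lower_lip = (160, 190), (160, 210)              # points moved by the expression

f_mouth_opening = normalized_feature(upper_lip, lower_lip,
                                     inner_left_eye, inner_right_eye)
print(f_mouth_opening)   # 0.5: mouth opening measured in eye-separation units
```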