7.2 Face-Tracking Experiments
The proposed system was written in Visual C++ and implemented on a personal computer
with an Intel Core i3 3.1 GHz CPU running the Windows 7 operating system. The proposed detection
system took 0.516 s for an image measuring 720 × 576 pixels. During the tracking process, the
detection window size was 200 × 200 pixels, and the detection time was about 0.08 s when only the
detection operation was performed. The real-time tracking system used a SONY CCD camera
to capture images, each measuring 720 × 576 pixels. At the start, a face was detected
in the whole captured image; subsequently, the face was detected within a predicted
search region measuring 200 × 200 pixels.
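The coarse-to-fine loop described above (full-frame detection at start-up, then detection restricted to a predicted 200 × 200 search region) can be sketched as follows. This is only an illustrative outline, not the chapter's implementation: it assumes OpenCV for capture and uses a Haar cascade as a stand-in for the proposed detector, and it predicts the next search region simply by re-centring a 200 × 200 window on the last detection.

```cpp
// Sketch of the coarse-to-fine face-tracking loop (illustrative only).
// ASSUMPTIONS: OpenCV capture, a Haar cascade as a stand-in detector,
// and a 200x200 search window re-centred on the previous detection.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cam(0);                          // e.g. the SONY CCD camera
    cv::CascadeClassifier face;                       // stand-in face detector
    face.load("haarcascade_frontalface_default.xml"); // hypothetical model path

    cv::Rect search(0, 0, 720, 576);   // start with the whole captured image

    cv::Mat frame;
    while (cam.read(frame)) {
        std::vector<cv::Rect> faces;
        face.detectMultiScale(frame(search), faces);  // detect only inside the window

        if (!faces.empty()) {
            // Map the detection back to full-frame coordinates.
            cv::Rect f = faces[0] + cv::Point(search.x, search.y);

            // Predict the next search region: a 200x200 window centred on the face,
            // clipped to the image borders.
            cv::Point c(f.x + f.width / 2, f.y + f.height / 2);
            search = cv::Rect(c.x - 100, c.y - 100, 200, 200)
                     & cv::Rect(0, 0, frame.cols, frame.rows);
            cv::rectangle(frame, f, cv::Scalar(0, 255, 0), 2);
        } else {
            // Face lost: fall back to full-frame detection on the next frame.
            search = cv::Rect(0, 0, frame.cols, frame.rows);
        }
        cv::imshow("tracking", frame);
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}
```

Restricting detection to the small predicted window is what reduces the per-frame detection time from roughly 0.5 s on the full 720 × 576 image to about 0.08 s, which is what makes real-time tracking feasible.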
7.2.1 Experiment 1
In this experiment, we tested the quality of the proposed face-tracking system on the standard
IIT-NRC facial video database and compared it with the incremental visual tracker (IVT). This data-
base contains short videos of users, captured by a webcam placed on the computer monitor, that show
large changes in facial expression and orientation. Figure 13 illustrates
the tracking results of our approach and IVT on the IIT-NRC facial video database. We observed
that our method, unlike the IVT approach, is able to track the target through pose
(190, 206), expression (89, 104), and size (89) variations while maintaining the size of the
detected face, which allows the tracked frames to be used for recognition.
FIGURE 13 Results of our approach and IVT tracker on the IIT-NRC facial video database.
7.2.2 Experiment 2
In this experiment, we tested the quality of the proposed face-tracking system on a set of
500 video clips collected from YouTube. The frame size ranged from 320 × 240 to 240 × 180
pixels. Despite the heavy noise in these videos, mostly due to their low resolution and
high compression rates, our tracker successfully tracked 90% of the video clips. Figure 14
presents examples of well-tracked videos. The level of performance obtained by our
tracker is more than satisfactory considering the low quality and high variability of
the data tested.
FIGURE 14 Face-tracking results on YouTube video clips.
7.2.3 Experiment 3
Figure 15 shows the tracking results for a succession of captured frames. These results show
that the proposed system can correctly track a subject under various complex motions and par-
tial occlusions. By tracing the center point of the face detected in each frame, we obtained a
motion signature, as shown in Figure 16. Such motion signatures can be used to characterize
human activities.
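A minimal sketch of how such a motion signature could be assembled is shown below. It assumes a hypothetical list of per-frame face rectangles (for example, the output of the tracking loop sketched earlier) and simply records the sequence of face centers.

```cpp
// Sketch: build a motion signature as the sequence of face-centre points.
// ASSUMPTION: 'detections' holds the face rectangle found in each frame.
#include <opencv2/core.hpp>
#include <vector>

std::vector<cv::Point> motionSignature(const std::vector<cv::Rect>& detections) {
    std::vector<cv::Point> signature;
    signature.reserve(detections.size());
    for (const cv::Rect& r : detections) {
        // Centre of the detected face in this frame.
        signature.emplace_back(r.x + r.width / 2, r.y + r.height / 2);
    }
    return signature;
}
```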
 
 