The Human Monitoring Interface is then capable not only of simple monitoring
of the area, but also of querying monitored objects based on their previous
occurrences, visual properties, and behavior. Behavior is derived from an ob-
ject's trajectory, its interactions with the environment, and mutual interactions,
using statistical and data mining methods. This is illustrated in figure 1b.
3 Computer Vision Techniques
There are two major areas we would like to evaluate: computer vision and
surveillance information retrieval. The computer vision part is further divided
into object tracking, feature extraction, and 3D calibration, as illustrated in
figure 2.
Computer vision is a broad but still maturing area, summarized by
Sonka, Hlavac and Boyle in [14]. We focus on visual surveillance methods,
especially distributed surveillance systems, as reviewed by Valera and Velastin
[15] and in the CARETAKER deliverables [4].
3D camera calibration [14] is an optional technique in the IR-based ap-
proach; when an exact 3D calibration is required, we use CARETAKER's Kali-
broU, a camera calibration program based on Tsai's method [4]. We therefore
concentrate on tracking, feature extraction, and object recognition.
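Details of Tsai's method and KalibroU are given in [4]; as a generic illustration of what a 3D calibration yields, the following sketch projects a world point into pixel coordinates with a plain pinhole model. All parameter values (intrinsic matrix, camera pose) are hypothetical, not KalibroU's actual output:

```python
import numpy as np

# Hypothetical intrinsic matrix K (focal lengths fx, fy; principal point cx, cy).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics: identity rotation, camera 5 m in front of the scene.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

def project(point_3d):
    """Project a 3D world point into pixel coordinates (pinhole model)."""
    p_cam = R @ point_3d + t       # world -> camera coordinates
    p_img = K @ p_cam              # camera -> homogeneous image coordinates
    return p_img[:2] / p_img[2]    # perspective divide

# The world origin projects onto the principal point (cx, cy) = (320, 240).
u, v = project(np.array([0.0, 0.0, 0.0]))
```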
3.1 Object Tracking
Object tracking [14] is a complex problem, and it is hard to make it work well
in real (crowded) scenes, as illustrated in figure 3. The discussed approach is based
mainly on proven methods of object tracking implemented in the Open Computer
Vision Library (OpenCV) [8]; we have also been inspired by the approach developed
by Carmona et al. [3]. The tracking process is illustrated in figure 2.
The background is modeled using Gaussian Mixture Models [8] as the average
color value of each pixel of the video; the foreground consists of values that
differ from the background. Pixels are segmented in the RGB color space into
background, foreground, and noise (reflection, shadow, ghost, and fluctuation)
using a color-difference Angle-Mod cone whose vertex lies at the origin of the
RGB coordinate system. In this way, illumination can be separated from the
color more easily.
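A minimal sketch of this idea: a running-average background model and an angular test in RGB space. A pixel whose color vector points in (almost) the same direction as the background vector, i.e. lies inside a cone with vertex at the RGB origin, differs mainly in illumination (shadow or highlight) and is kept as background. The update rate and angle threshold here are illustrative, not the values used in the actual system:

```python
import numpy as np

ALPHA = 0.05          # background update rate (illustrative)
ANGLE_THRESH = 0.10   # cone half-angle in radians (illustrative)

def classify(frame, background):
    """Foreground mask via the angle between pixel and background RGB vectors."""
    dot = np.sum(frame * background, axis=-1)
    norms = np.linalg.norm(frame, axis=-1) * np.linalg.norm(background, axis=-1)
    cos_angle = dot / np.maximum(norms, 1e-6)
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return angle > ANGLE_THRESH       # True = foreground

def update_background(frame, background):
    """Simplified running-average background model (one mode per pixel)."""
    return (1 - ALPHA) * background + ALPHA * frame

# A darker version of the background color (a shadow) stays background,
# while a differently colored pixel becomes foreground.
bg = np.full((2, 2, 3), [100.0, 100.0, 100.0])
frame = bg.copy()
frame[0, 0] = [50.0, 50.0, 50.0]     # same hue, half brightness -> shadow
frame[1, 1] = [200.0, 20.0, 20.0]    # red object -> foreground
mask = classify(frame, bg)
```

A full Gaussian Mixture Model additionally keeps several color modes and a variance per pixel; the angular test above only illustrates why the cone separates illumination from color change.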
The other two modules, blob entrance detection and blob tracking, are standard
OpenCV Blobtrack functions [8]. Blob entrance detection tracks connected compo-
nents of the foreground mask. The blob tracking algorithm is again based on
connected-component tracking, with particle filtering based on a mean-shift re-
solver for collisions. Trajectories are further refined with a Kalman filter, as
described in section 4.
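The connected-component step can be sketched without OpenCV as a simple flood-fill labeling of the binary foreground mask (a toy stand-in for the library's blob detection, not its actual implementation):

```python
from collections import deque

def label_blobs(mask):
    """Label 4-connected components of a binary foreground mask.

    Returns a list of blobs, each a list of (row, col) pixels; in the real
    system each blob becomes a candidate object with an ID and bounding box.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                queue, blob = deque([(r, c)]), []
                seen[r][c] = True
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
blobs = label_blobs(mask)  # two blobs: one of 3 pixels, one of 2
```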
The trajectory generation module has been completely rewritten to add the
feature extraction and TCP/IP network communication capability. The protocol
is based on XML, similarly to MPEG-7 [9]. Each object's ID and trajectory are
in this way delivered to a defined IP address and service (port 903).
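The message schema is not reproduced here; the following sketch uses hypothetical element names (`object`, `point`) to show how an object's ID and trajectory could be serialized as XML and delivered to the configured service over TCP:

```python
import socket
import xml.etree.ElementTree as ET

def build_message(object_id, trajectory):
    """Serialize an object's ID and trajectory as XML.

    Element names are illustrative only, not the actual MPEG-7-like schema.
    """
    root = ET.Element("object", id=str(object_id))
    for frame, (x, y) in enumerate(trajectory):
        ET.SubElement(root, "point", frame=str(frame), x=str(x), y=str(y))
    return ET.tostring(root, encoding="unicode")

def send_message(message, host, port=903):
    """Deliver the XML message to the defined IP address and service port."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(message.encode("utf-8"))

msg = build_message(42, [(10, 20), (12, 21)])
# send_message(msg, "surveillance-host")  # hypothetical receiver address
```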
 