Blob Detection: This level operates with blobs instead of raw sensor measurements. The detection process uses information from the localized and filtered images to extract the blobs they contain. As some kinds of sensors provide distance information (e.g. range sensors), blob coordinates are defined with six components (x_image, y_image, z_image, width_image, height_image, depth_image), even though the depth components (z_image, depth_image) may be void. During the blob detection process, other parameters defining the spots, such as contour, brightness or color information, may also be extracted. In addition, heuristic methods can be applied to discard spurious spots. Again, time information and a unique identifier are associated with each blob.
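A minimal sketch of such a blob record, assuming Python and illustrative field names (the text only prescribes the six coordinate components, a timestamp, and a unique identifier; everything else here is an assumption):

```python
import itertools
import time
from dataclasses import dataclass, field
from typing import Optional

# Monotonically increasing counter used to hand out unique blob identifiers.
_blob_ids = itertools.count()

@dataclass
class Blob:
    """A detected blob; the depth components stay None (void) for
    sensors that provide no distance information."""
    x: float
    y: float
    width: float
    height: float
    z: Optional[float] = None
    depth: Optional[float] = None
    timestamp: float = field(default_factory=time.time)
    blob_id: int = field(default_factory=lambda: next(_blob_ids))

# A blob from a 2-D sensor: the z/depth components remain void.
b = Blob(x=10.0, y=20.0, width=5.0, height=8.0)
```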
Object Identification: At this level, object-level information is obtained from the blob information. This level takes into account the history of the objects in the scene; that is, object features are updated over time by comparing blob information from the current and previous iterations. Thanks to the knowledge of an object's history, it is possible to calculate parameters derived from its motion (e.g. direction and speed). Information regarding corners or invariant points can also be added to the object's definition, depending on the application's needs.
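The history-based derivation of motion parameters can be sketched as follows; the class and method names are illustrative, and the velocity estimate simply differences the last two time-stamped positions:

```python
import math

class TrackedObject:
    """Keeps an object's time-stamped position history so that motion
    parameters (speed, direction) can be derived between iterations."""

    def __init__(self, obj_id, x, y, t):
        self.obj_id = obj_id
        self.history = [(t, x, y)]

    def update(self, x, y, t):
        """Record the position matched from the current iteration's blobs."""
        self.history.append((t, x, y))

    def velocity(self):
        """Speed (units/s) and direction (degrees) from the last two
        observations; returns (0, 0) until two iterations have been seen."""
        if len(self.history) < 2:
            return 0.0, 0.0
        (t0, x0, y0), (t1, x1, y1) = self.history[-2], self.history[-1]
        dt = t1 - t0
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        direction = math.degrees(math.atan2(y1 - y0, x1 - x0))
        return speed, direction
```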
Object Classification: The classification level uses contour information to clip the images (from the acquisition or fusion level) that serve as inputs to the classifier. In this way, the classification algorithm provides information about what the object is (its class). Classification methods can also obtain the object's orientation, which is useful for activity detection. This level uses as input the acquired images as well as the information generated by the previous level (contours, invariant points, and so on).
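The contour-driven clipping step can be sketched as below. The image is modelled as a plain row-major 2-D list, and the classifier is any callable returning a (class, orientation) pair; these representations are assumptions made for illustration, not details from the chapter:

```python
def bounding_box(contour):
    """Axis-aligned bounding box of a contour given as (x, y) points."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    return min(xs), min(ys), max(xs), max(ys)

def classify_object(image, contour, classifier):
    """Clip the image to the contour's bounding box and classify the patch.

    `classifier` maps a patch to a (class_label, orientation) pair; any
    real system would plug in its own trained model here.
    """
    x0, y0, x1, y1 = bounding_box(contour)
    patch = [row[x0:x1 + 1] for row in image[y0:y1 + 1]]
    return classifier(patch)
```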
Object Tracking: This level is in charge of calculating the trajectories followed by the moving objects in the scene. Beforehand, it is necessary to calculate the objects' positions on the real-world map. Moreover, a monitoring and interpretation system must keep tracking the objects independently of which sensor is detecting them. For that reason, the calculated trajectories must be independent of the sensors (where possible).
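One way to make trajectories sensor-independent is to project each sensor's image coordinates onto the common world map before storing them. The sketch below assumes a planar-scene homography per sensor (a common choice, though the chapter does not mandate it); the 3x3 matrix would come from a calibration step not shown here:

```python
def image_to_world(H, x, y):
    """Project an image point onto world-map coordinates using a 3x3
    homography H (row-major nested lists)."""
    xw = H[0][0] * x + H[0][1] * y + H[0][2]
    yw = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xw / w, yw / w

class Trajectory:
    """A sensor-independent track: points are stored in world coordinates,
    so observations from different sensors can extend the same track."""

    def __init__(self, obj_id):
        self.obj_id = obj_id
        self.points = []

    def add_observation(self, sensor_H, x_img, y_img, t):
        xw, yw = image_to_world(sensor_H, x_img, y_img)
        self.points.append((t, xw, yw))
```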
Event Detection: This level translates the information coming from the lower levels into semantic primitives that model the relationships involving the objects present in the scenario. Thus, the inputs are the objects tracked by the previous levels and the sensor data, whilst the output is closely linked to the event detection algorithm (HMM, SVM, Bayesian networks, etc.). Nevertheless, it is possible to define a common event representation format by using the flexibility of XML abstractions (see Table 1). XML provides an open structure, able to wrap all the proposals' outputs, homogenizing them for use by the upper layers. This representation also allows managing the probability associated with the events. Again, a time parameter is associated with the events to simplify the event fusion process at the next level.
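Such an XML wrapper might look like the sketch below. The tag and attribute names are illustrative (the actual schema is given in Table 1); the text only requires that each event carry a probability and a time parameter:

```python
import xml.etree.ElementTree as ET

def event_to_xml(name, object_ids, probability, timestamp):
    """Wrap a detected event in a common XML format so that upper layers
    can consume events uniformly, whatever algorithm produced them."""
    ev = ET.Element("event",
                    name=name,
                    probability=f"{probability:.2f}",
                    time=f"{timestamp:.3f}")
    for obj_id in object_ids:
        ET.SubElement(ev, "object", id=str(obj_id))
    return ET.tostring(ev, encoding="unicode")

xml_text = event_to_xml("enter_zone", [1, 2], 0.87, 12.5)
```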