Localization and Filtering Fusion: This level fuses the images produced in the localization and filtering stage, since several localization and filtering approaches may run in the framework at once (e.g., one devoted to color images and another to IR images). Thus, this level selects the most beneficial features from the input images.
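As a minimal sketch of one possible fusion rule at this level, the per-pixel maximum retains the most salient response from each co-registered input. The function name and the assumption of equally sized grayscale inputs are illustrative, not part of the framework.

```python
# Hypothetical pixel-level fusion: keep, at each position, the strongest
# response among the co-registered inputs (e.g., color-based and IR-based
# localization maps). Assumes equally sized 2-D grayscale arrays.
def fuse_images(img_a, img_b):
    """Per-pixel maximum of two equally sized grayscale images."""
    return [[max(a, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]
```

Other rules (weighted averaging, confidence-driven selection) fit the same interface; the choice depends on how each localization approach scores its output.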
Blob Detection: The blob detection level filters out isolated spots misdetected in the previous levels. In addition, it is in charge of extracting information associated with the spots to allow a more efficient analysis of the objects. This information is application-dependent.
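A sketch of this level, assuming a binary foreground mask as input: label 4-connected components, discard components below a minimum area (the isolated misdetections), and extract per-blob features. The function name, the `min_area` threshold, and the feature set are illustrative; real deployments would choose application-dependent features.

```python
# Hypothetical blob detection sketch: connected-component labeling over a
# binary mask, dropping isolated spots smaller than `min_area` and
# extracting area, centroid, and bounding box for the survivors.
def detect_blobs(mask, min_area=3):
    """Return features for 4-connected blobs whose area >= min_area."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill the component starting at (y, x).
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:  # drop isolated misdetections
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    blobs.append({
                        "area": len(pixels),
                        "centroid": (sum(xs) / len(xs), sum(ys) / len(ys)),
                        "bbox": (min(xs), min(ys), max(xs), max(ys)),
                    })
    return blobs
```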
Object Identification: This level operates with objects instead of blobs. This raises the level of abstraction, mapping object coordinates into the real world instead of simply operating with image coordinates.
Object Classification: This level is especially important for good activity analysis because it provides knowledge about “what” the object is. Object classification may also provide information about the objects' orientation.
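One simple way to realize this level, sketched below under assumed thresholds: derive a coarse class and a rough orientation from a blob's shape features. The class names, thresholds, and feature choices are all illustrative; the framework does not prescribe a particular classifier.

```python
# Hypothetical shape-based classifier: bounding-box aspect ratio and fill
# ratio ("extent") map a detected object to a coarse class, and the
# dominant axis gives a rough orientation. Thresholds are illustrative.
def classify_object(width, height, area):
    """Return (class_label, orientation) for a blob's bounding box and area."""
    extent = area / (width * height)  # how much of the box the blob fills
    aspect = height / width
    if aspect > 1.5:
        label = "person"              # tall, narrow silhouette
    elif extent > 0.6:
        label = "vehicle"             # compact, box-filling shape
    else:
        label = "unknown"
    orientation = "vertical" if aspect > 1.0 else "horizontal"
    return label, orientation
```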
Object Tracking: This level is in charge of mapping the image objects' coordinates onto the real map. It thus calculates the trajectories followed by the moving objects within the scenario, independently of the particular sensor that detected them. It also predicts future positions of the objects on the basis of the previously detected trajectories. This level uses the information from the common model referring to the map, the sensors' positions, and their coverage ranges.
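The two steps above can be sketched as follows, assuming each sensor's calibration is given as a planar homography and that a constant-velocity model suffices for short-term prediction. Both function names and the trajectory layout are illustrative.

```python
# Hypothetical tracking-level sketch. Step 1: project pixel coordinates
# onto the common map using a 3x3 homography H from the sensor's
# calibration (part of the common model). Step 2: predict the next map
# position with a constant-velocity model over the last two observations.
def image_to_map(H, u, v):
    """Apply homography H (list of 3 rows of 3 floats) to pixel (u, v)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

def predict_next(trajectory, dt=1.0):
    """trajectory: list of (t, x, y) map positions; predict (x, y) at t_last + dt."""
    (t0, x0, y0), (t1, x1, y1) = trajectory[-2], trajectory[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    return (x1 + vx * dt, y1 + vy * dt)
```

Because positions live on the common map, trajectories from different sensors covering the same area can be compared and joined directly.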
Event Detection: The event detection level generates semantic information related to the behavior of the objects in the scenario. These events are considered instantaneous, that is, they are time independent. Examples include events such as running, walking, or falling, which can be detected with just one or at most a few input images. This is the last layer of the framework held within the remote nodes (see Fig. 2). The next layers are implemented in the central node, together with the common model and the central controller and view.
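A minimal sketch of such instantaneous detection, assuming that each tracked object already carries an estimated speed and a bounding-box aspect ratio from the previous levels. The thresholds and event labels are illustrative only.

```python
# Hypothetical per-frame event detector: map instantaneous object
# features to a semantic label without consulting temporal context.
# Thresholds (in arbitrary units) are illustrative, not calibrated.
def classify_event(speed, aspect_ratio):
    """Return an event label from one frame's object features."""
    if aspect_ratio > 1.5:   # wider than tall: fallen/lying posture
        return "falling"
    if speed > 2.0:
        return "running"
    if speed > 0.2:
        return "walking"
    return "standing"
```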
Event Fusion: In a multisensory monitoring and interpretation system, where several sensors monitor a common scenario, the events generated from different sources usually do not match. This is why the event fusion level is necessary to unify the events arriving from the different sensory sources in the previous level.
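One simple fusion rule, sketched under assumed event tuples `(time, label, sensor, confidence)`: reports with the same label that fall within a short time window are merged into a single event, recording the contributing sensors and keeping the highest confidence. All names and the window size are illustrative.

```python
# Hypothetical event fusion sketch: merge same-label events from
# different sensors that occur within `window` seconds of each other.
# Each fused entry is (time, label, {sensors}, best_confidence).
def fuse_events(events, window=0.5):
    """events: iterable of (t, label, sensor, confidence); returns fused list."""
    fused = []
    for t, label, sensor, conf in sorted(events, key=lambda e: e[0]):
        for i, (ft, flabel, fsensors, fconf) in enumerate(fused):
            if flabel == label and abs(t - ft) <= window:
                # Same event observed by another sensor: merge the reports.
                fused[i] = (ft, flabel, fsensors | {sensor}, max(fconf, conf))
                break
        else:
            fused.append((t, label, {sensor}, conf))
    return fused
```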
Activity Detection: This final level of the architecture is in charge of the analysis and detection of activities, which are associated with temporal features. After event fusion, this level has better knowledge of what is happening in the scenario according to the detected events. Hence, the activities detected at this level can be translated into actions along the scenario, providing a higher abstraction level.
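The temporal character of this level can be sketched as matching the fused event stream against ordered label patterns: a sequence of instantaneous events, in the right order, yields a higher-level activity. The pattern table and activity names are illustrative assumptions, not part of the framework.

```python
# Hypothetical activity detection sketch: an activity is recognized when
# its ordered label pattern occurs as a subsequence of the fused event
# stream. Patterns here are examples only.
PATTERNS = {
    "fall_incident": ["walking", "falling", "standing"],
}

def detect_activities(event_labels, patterns=PATTERNS):
    """Return activities whose pattern occurs in order within the stream."""
    found = []
    for name, pattern in patterns.items():
        stream = iter(event_labels)
        # `label in stream` consumes the iterator up to the match, so this
        # checks for an in-order (not necessarily contiguous) subsequence.
        if all(label in stream for label in pattern):
            found.append(name)
    return found
```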