4 Definition of the Common Model
After describing the framework levels, it is time to simplify the information exchange among them. For this purpose, a new and fundamental layer is introduced. This layer, known as the common model, gathers all the information from the different levels and provides primitives to access it. The common model is a variation of the traditional MVC model, in which the algorithm of each module processes the information, always under the controller's management. The common model is therefore only in charge of holding the shared information to be accessed by every execution module, and it provides the primitives needed to manage that data. To properly define the common model, we start from the layers that compose the architecture. Since their input and output parameters are known, it is possible to determine which of them belong to the common model.
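To make the role of the common model concrete, the following is a minimal sketch in Python of a shared store with access primitives. All names here (TimedImage, CommonModel, put, get_since) are illustrative assumptions, not identifiers from the framework itself:

```python
# Sketch of a common model: timestamped data plus primitives to manage it.
# Names and field layout are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class TimedImage:
    id_time: float        # ID TIME synchronization field
    pixels: bytes         # raw image data; encoding left open

@dataclass
class CommonModel:
    acquired: list = field(default_factory=list)   # acquisition-level images
    fused: list = field(default_factory=list)      # sensor-fusion-level images

    # Primitive for a level to contribute data to the common model.
    def put(self, store: list, image: TimedImage) -> None:
        store.append(image)

    # Primitive to retrieve all data with a time stamp at or after t.
    def get_since(self, store: list, t: float) -> list:
        return [img for img in store if img.id_time >= t]
```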
Acquisition: This level obtains data from diverse sources (cameras, sensors, databases, etc.), which determine the nature of the parameters contributed to the common model. First, the common model holds a list of images captured from the cameras, regardless of the underlying camera technology (IR, color, range, etc.), adapted to the application requirements. A time parameter (ID TIME) is associated with each image to ensure correct synchronization. Moreover, sensor readings are also included in the common model as XML structures, again with an associated time parameter.
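A minimal sketch of these two acquisition-level contributions, assuming Python; the record layout and function names are hypothetical:

```python
# Sketch of acquisition-level contributions: a timestamped camera frame
# and a sensor reading serialized as an XML structure. Field names are
# assumptions, not part of the original framework.
import time
import xml.etree.ElementTree as ET

def capture_frame(camera_id: str, pixels: bytes) -> dict:
    # Each image carries an ID TIME field for synchronization.
    return {"id_time": time.time(), "camera": camera_id, "pixels": pixels}

def read_sensor(sensor_id: str, value: float) -> dict:
    # Sensor readings are stored as XML structures with a time parameter.
    root = ET.Element("reading", sensor=sensor_id)
    ET.SubElement(root, "value").text = str(value)
    return {"id_time": time.time(), "xml": ET.tostring(root, encoding="unicode")}
```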
Sensor Fusion: Since the most common parameters in a monitoring and interpretation system are images, this level contributes a list of new fused images to the common model, independent of the acquisition image list. The content of this list depends on the fusion algorithms implemented. Again, time information is associated with the fused images. The fusion of non-visual sensory data is left open, due to its strong dependence on the sensor technology and the application.
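Since the text leaves the fusion algorithms open, the sketch below only illustrates the shape of such a step, fusing two roughly synchronized frames by pixel-wise averaging; the operator and the skew check are assumptions:

```python
# Illustrative fusion step: pixel-wise averaging of two frames sharing
# (approximately) the same ID TIME. The averaging operator is only an
# example; the framework does not prescribe a fusion algorithm.
import numpy as np

def fuse_frames(frame_a: np.ndarray, frame_b: np.ndarray,
                t_a: float, t_b: float, max_skew: float = 0.04):
    assert abs(t_a - t_b) <= max_skew, "frames are not synchronized"
    fused = ((frame_a.astype(np.uint16) + frame_b.astype(np.uint16)) // 2)
    # The fused image receives its own time stamp, preserving synchronization.
    return fused.astype(np.uint8), (t_a + t_b) / 2
```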
Localization and Filtering: In the literature, two terms are commonly mixed when talking about segmentation [3,17]. First, there is localization and filtering, defined as the process whereby, from an input image, a set of spots containing the objects of interest is isolated. Second, there is the blob detection process. This level receives images as input, coming either from the acquisition level or from the sensor fusion level. Its output is a list of images containing the isolated regions, highlighted against the background (white foreground on black background). Again, it is important to provide time information with these images, so a time field (ID TIME) is attached to them.
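A minimal sketch of such a step, assuming simple background subtraction with thresholding (the particular algorithm is an assumption; the level only fixes the input images and the binary, timestamped output):

```python
# Illustrative localization and filtering step: background subtraction
# followed by thresholding, yielding a white-foreground / black-background
# mask with an attached ID TIME field. The algorithm itself is an assumption.
import numpy as np

def localize(frame: np.ndarray, background: np.ndarray,
             id_time: float, threshold: int = 30) -> dict:
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = np.where(diff > threshold, 255, 0).astype(np.uint8)
    return {"id_time": id_time, "mask": mask}
```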
Localization and Filtering Fusion: This level fuses the information coming from the previous one. Several localization and filtering algorithms can be incorporated into the framework, and combining their results should provide a more accurate localization. This level therefore adds a new fused image list, containing the results of its operation, to the common model.
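One way such a combination could work is a pixel-wise majority vote over the binary masks produced by the different algorithms; the voting rule below is an assumption, since the text does not specify the combination method:

```python
# Illustrative fusion of several localization masks by pixel-wise majority
# vote. The voting rule is an assumption; the level only specifies that the
# combined results form a new fused image list in the common model.
import numpy as np

def fuse_masks(masks: list, id_time: float) -> dict:
    votes = np.sum([m > 0 for m in masks], axis=0)
    fused = np.where(votes * 2 > len(masks), 255, 0).astype(np.uint8)
    return {"id_time": id_time, "mask": fused}
```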