and pictures, but as it is necessary to devise some way to express interdirectional
references or other kinds of "flat structure" information between the objects, it is
hard to say that objects are always appropriate.
The goal of the present paper was to realize a core data modelling method based
on mediators that correspond to both the "meaning" and the actual data, and to
evaluate its feasibility. Therefore, taking scene database design as a test case, we
tried to express the information with an entity-based data model, using a design
technique that is as general-purpose as possible and aiming at an expression from
the viewpoint of flat structure.
The database schema shown in Figure 14 is described using the AIS (Associative
Information Structure) model 16). The AIS model, which is based on the ER model 17), is an
entity-based data model: it expresses information using entities, which represent the
objects and phenomena of the real world, and associations, which express the
relationships between them. We have already proposed a general data manipulation
language for the AIS model, MMQL 18), which can be used in database design from the
directions of both data description and data manipulation. In Figure 14, the rectangles
express entity types and the small circles (○) represent entities of the given type. The
relationships between the types can be direct relationships (expressed by lozenges),
multi-valued attributes (double arrows), etc., and a solid line between entities shows
correspondence between instances.
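As a minimal sketch only, and not the actual AIS notation or the MMQL language of 18), the following Python fragment illustrates how entity types, entities of a type, and associations between entities could be represented in such an entity-based model; all class, field, and example names here are our own assumptions.

```python
from dataclasses import dataclass, field

# Minimal sketch of an entity-based model in the spirit of the AIS
# description above: entity types, entities belonging to a type, and
# associations between entities. All names are illustrative assumptions,
# not the actual AIS or MMQL definitions.

@dataclass
class EntityType:
    name: str                      # e.g. "SCENE", "FRAME"

@dataclass
class Entity:
    entity_type: EntityType        # the type (rectangle) this entity belongs to
    key: str                       # identifier of this individual entity
    attributes: dict = field(default_factory=dict)

@dataclass
class Association:
    name: str                      # the relationship (the "lozenge")
    source: Entity
    target: Entity

# Example: a SCENE entity associated with one of its FRAME entities.
scene = Entity(EntityType("SCENE"), "scene-1")
frame = Entity(EntityType("FRAME"), "frame-001", {"camera": "left"})
has_frame = Association("has_frame", scene, frame)
```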
The information expressed by the schema in Figure 14 is as follows.
1. As information about the acquired scene ( SCENE ), we store the objects
that make up the background of the scene ( BACKGROUND_OBJECT ), the
points of time in the scene ( POINT_OF_TIME ), and the sampled data,
including the pictures of the scene taken by a stereo camera pair
( FRAME_PAIR, FRAME ), etc.
2. We store the acquired Shape of Individual Model as an entity ( SHAPE_OF
_INDIVIDUAL_MODEL ), together with its polygon information.
3. We express the movement of the objects participating in the scene
( MOVEMENT ) using entities that express the occurrence of the Shape of
Individual Model of those objects in the scene ( OCCURRENCE ) at given
points of time, and as the posture at those points of time we store
their coordinates and rotation angles ( TRANSLATE, ROTATE ).
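As a rough sketch of the information listed above, the stored data could be laid out along the following lines; the type and field names are assumptions for illustration and do not reproduce the actual schema of Figure 14, and the polygon and posture fields are simplifications.

```python
from dataclasses import dataclass, field

# Rough sketch of the stored information listed above; all type and field
# names are assumptions, not the actual schema of Figure 14.

@dataclass
class FramePair:
    time: float                    # POINT_OF_TIME at which the pictures were taken
    left_frame: str                # FRAME from one camera of the stereo pair
    right_frame: str               # FRAME from the other camera

@dataclass
class ShapeOfIndividualModel:
    model_id: str
    polygons: list                 # polygon information of the acquired shape

@dataclass
class Occurrence:
    model_id: str                  # which Shape of Individual Model occurs
    time: float                    # point of time of this posture
    translate: tuple               # TRANSLATE: x, y, z coordinates
    rotate: tuple                  # ROTATE: rotation angles

@dataclass
class Scene:
    background_objects: list = field(default_factory=list)  # BACKGROUND_OBJECT
    frame_pairs: list = field(default_factory=list)         # sampled data (FRAME_PAIR)
    occurrences: list = field(default_factory=list)         # MOVEMENT as occurrences
```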
In other words, the movement information of the objects in the scene is expressed
as a series of position and rotation data of the Shape of Individual Model over time,
and thus we can construct a scene database by mapping the scene into a virtual CG
space. Since in this database the information about the background objects that make
up the scene is expressed in a framework that unifies it with the sampled data (video
pictures taken with a stereo camera pair, etc.), it is possible to give several types of
answers to different questions, e.g. a video picture as a query result. As described
above, we were able to construct a scene database that uses a virtual CG space.
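As an example of the kind of query answer mentioned above, the small sketch below returns the stereo pictures sampled closest to a requested point of time; the (time, left_frame, right_frame) layout is an illustrative assumption and is not the authors' actual query interface (MMQL).

```python
# Sketch of answering a query with a video picture as the result: given a
# point of time, return the stereo frame pair sampled closest to it.
# The (time, left_frame, right_frame) layout is an illustrative assumption,
# not the authors' actual query interface (MMQL).

def frame_pair_at(time, frame_pairs):
    """Return the (time, left, right) triple whose sampling time is
    closest to the requested point of time."""
    return min(frame_pairs, key=lambda fp: abs(fp[0] - time))

# Example: a query at t = 1.8 returns the second sampled frame pair.
pairs = [(1.0, "left-001.png", "right-001.png"),
         (2.0, "left-002.png", "right-002.png")]
print(frame_pair_at(1.8, pairs))
```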