[Fig. 1 diagram. Panel (a): camera network → image features and geometric features (from individual cameras or multiple cameras) → activities → Markov Logic Network (MLN) with a knowledge base (KB) encoding the relationship between objects and activities → semantic labeling of the environment. Panel (b): camera network → foreground extraction → spatiotemporal features → Conditional Random Field (CRF), together with face detection and location estimation; detected activities: standing, lying, sitting, vacuuming, cutting, eating, watching, walking, reading, typing, scrambling.]
Fig. 1. Overview of system modules. (a) Layered structure for object recognition
through human activities in the smart home. (b) Hierarchical activity analysis through
different types of image features. The activities detected in the smart home are shown
in ellipses.
the hierarchical activity analysis can be found in Fig. 1(b). This step yields the
location and activity of the person. Note that not all activities shown in Fig. 1(b)
are used in object recognition (Table 1), since some activities are not directly
related to objects in the environment.
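As a minimal sketch of what this step produces (the record name and fields below are assumptions for illustration, not taken from the paper), each detection can be stored as a small record holding the person's estimated location and recognized activity:

```python
from dataclasses import dataclass


@dataclass
class ActivityObservation:
    """Output of the hierarchical activity analysis for one time step.

    The paper only states that this step yields the person's location and
    activity; the concrete fields here are illustrative.
    """
    timestamp: float  # time of the observation (seconds)
    x: float          # estimated person location in the room (meters)
    y: float
    activity: str     # e.g. "sitting", "typing", "eating"
```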
In the second step, the room is divided into a grid of 30 cm × 30 cm cells, and
the object type of each cell is inferred from the activities observed in that cell.
The object-activity relationship is defined in the knowledge base of the MLN. Activity
observations are converted into evidence predicates that are fed into the MLN model.
The relevant MLN variables and formulas are then activated and grounded into a Markov
random field (MRF) to infer the object-type probabilities associated with the observed
activities. Finally, each grid cell in the room obtains a probability distribution over
all object types, and objects are identified as the cells with a high probability for
the corresponding type.
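Since the knowledge-base formulas are not spelled out here, the following Python sketch only approximates this grid-wise inference: the object types, the object-activity likelihood table, the predicate name HasActivity, and all function names are assumptions made for illustration, and the MLN/MRF inference is replaced by a simple likelihood model with a uniform prior and independent observations. It consumes the ActivityObservation records from the sketch above.

```python
from collections import defaultdict

GRID_SIZE = 0.30  # each grid cell is 30 cm x 30 cm

# Illustrative stand-in for the MLN knowledge base: P(activity | object type).
# In the actual system this relationship is encoded as weighted first-order
# formulas; the object types and numbers below are assumptions for the sketch.
ACTIVITY_GIVEN_OBJECT = {
    "chair": {"sitting": 0.7, "typing": 0.2, "eating": 0.1},
    "bed":   {"lying": 0.8, "sitting": 0.2},
    "table": {"eating": 0.5, "typing": 0.3, "reading": 0.2},
    "floor": {"standing": 0.5, "walking": 0.4, "vacuuming": 0.1},
}
OBJECT_TYPES = list(ACTIVITY_GIVEN_OBJECT)


def to_cell(x, y):
    """Map a continuous room location (meters) to a discrete grid cell index."""
    return (int(x // GRID_SIZE), int(y // GRID_SIZE))


def to_evidence(observations):
    """Group activity observations by grid cell, analogous to generating
    evidence predicates such as HasActivity(cell, activity) for the MLN."""
    evidence = defaultdict(list)
    for obs in observations:
        evidence[to_cell(obs.x, obs.y)].append(obs.activity)
    return evidence


def infer_object_types(evidence):
    """Per-cell posterior over object types under a uniform prior and
    independent observations (a simplification of the MRF inference)."""
    posteriors = {}
    for cell, activities in evidence.items():
        scores = {}
        for obj in OBJECT_TYPES:
            likelihood = 1.0
            for act in activities:
                likelihood *= ACTIVITY_GIVEN_OBJECT[obj].get(act, 1e-3)
            scores[obj] = likelihood
        total = sum(scores.values())
        posteriors[cell] = {obj: s / total for obj, s in scores.items()}
    return posteriors


def recognized_objects(posteriors, threshold=0.8):
    """Report cells whose most probable object type exceeds the threshold."""
    return {cell: max(dist, key=dist.get)
            for cell, dist in posteriors.items()
            if max(dist.values()) >= threshold}
```

The hand-coded table merely stands in for the weighted formulas of the MLN knowledge base; in the actual system those formulas are grounded into an MRF and probabilistic inference is run over it, rather than the naive per-cell product used in this sketch.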