Algorithm 2. Algorithm to separate domains
Input: original domain D_0
Output: a set of separate domains D = {D_1, D_2, ..., D_M}
for each activity a_i do
    Find the set of universal formulas F_u = {f_1, f_2, ...}.
    Find the set of non-universal formulas related to a_i: F_nu = {f_n1, f_n2, ...}.
    Get the object set in F_nu: O_i.
    Convert F_nu into conditional pdf form.
    Form domain D_i = {a_i, O_i, F_u, F_nu}.
end for
For each grid associated with activity a_i, the sub-domain D_i is activated. Since D_i changes only the probabilities of objects in its domain, the probabilities of objects not in the domain are rescaled so that all object probabilities sum to 1.
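As a minimal sketch of this rescaling step (the function and data layout below are our own, not code from the paper), suppose each grid holds a distribution over object types; after sub-domain D_i updates its own objects, the mass of the remaining objects is scaled proportionally so the distribution sums to 1 again:

```python
def rescale_grid_distribution(probs, in_domain, updated):
    """Renormalize one grid's object-type distribution after sub-domain D_i
    updates the probabilities of its own objects.

    probs     : dict object_type -> probability (sums to 1)
    in_domain : set of object types covered by the active sub-domain D_i
    updated   : dict object_type -> new probability for D_i's objects
    """
    new_probs = dict(probs)
    new_probs.update(updated)                       # D_i changes only its own objects
    mass_in = sum(new_probs.get(o, 0.0) for o in in_domain)
    mass_out = sum(p for o, p in probs.items() if o not in in_domain)
    scale = (1.0 - mass_in) / mass_out if mass_out > 0 else 0.0
    for o in new_probs:
        if o not in in_domain:
            new_probs[o] *= scale                   # outside objects absorb the rest
    return new_probs

# Example: D_i covers {cup, plate}; sofa is outside the domain and is rescaled.
p = rescale_grid_distribution({"cup": 0.2, "plate": 0.3, "sofa": 0.5},
                              in_domain={"cup", "plate"},
                              updated={"cup": 0.6, "plate": 0.1})
# p == {"cup": 0.6, "plate": 0.1, "sofa": 0.3}, which sums to 1
```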
Algorithm 2 shows the algorithm to convert the main domain D_0 of Knowledge Base 1 into a set of separate domains D = {D_1, D_2, ..., D_M}. Universal formulas refer to those that apply to all activities or object types. In Algorithm 2, F_nu is converted to conditional pdf form because we would like to ground the MLN into a Dynamic Bayesian Network (DBN), which is a directed graph, so that Prob(object, t) depends on the activity observations before t.
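To make the separation step concrete, here is a small self-contained sketch in Python (the Formula stand-in, the to_conditional_pdf stub, and all names are our own assumptions, not the paper's implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Formula:
    """Stand-in for an MLN formula (hypothetical, not a real MLN API)."""
    text: str
    is_universal: bool
    activities: frozenset    # activities the formula mentions
    object_types: frozenset  # object types the formula mentions

def to_conditional_pdf(formula):
    """Placeholder for converting a formula into conditional pdf form;
    the real conversion depends on the MLN-to-DBN grounding machinery."""
    return formula

def separate_domains(activities, formulas):
    """Mirror of Algorithm 2: split the original domain D_0 into
    per-activity sub-domains D_1, ..., D_M."""
    universal = [f for f in formulas if f.is_universal]               # F_u
    domains = []
    for a in activities:                                              # each a_i
        non_universal = [f for f in formulas
                         if not f.is_universal and a in f.activities] # F_nu
        objects = set().union(*(f.object_types for f in non_universal)) \
            if non_universal else set()                               # O_i
        conditional = [to_conditional_pdf(f) for f in non_universal]
        domains.append({"activity": a, "objects": objects,            # D_i
                        "universal": universal,
                        "conditional": conditional})
    return domains
```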
6 Experiments
We conducted the experiments in a test-bed smart home environment, called the
AIR (Ambient Intelligent Research) Lab. It is a smart studio located at Stanford
University (Fig. 3). It consists of a living room, kitchen, dining area, and study
area. The testbed is equipped with a network of cameras, a large-screen TV, a
digital window (projected wall), handheld PDA devices, appliances, and wireless
controllers for lights and ambient colors. Fig. 4 shows snapshots of several users
engaged in different activities. Our video data involve four users. There are six
scenarios in total, each captured by five synchronized cameras. In each scenario, one user performs different activities in a sequence of his/her own choosing, for around 10 minutes. The activity models are trained on a separate dataset described in [16].
To evaluate recognition performance, the object types are labeled on the grids and compared with the inference results (Table 2). In Table 2, results are processed at the end of each scenario, and the precision shown is calculated by pooling the results from all scenarios. Recall is obtained by calculating how many of the labeled grids for an object are covered correctly after inference. For each grid, the object type with the highest probability is chosen as the object type for that grid.
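A possible sketch of this evaluation (the grid and label structures are our own assumptions): each grid is assigned its argmax object type, and precision and recall are then accumulated per object type over all scenarios pooled together:

```python
from collections import Counter

def evaluate_grids(inferred, labels):
    """Per-object-type precision and recall over labeled grids.

    inferred : dict grid_id -> {object_type: probability}
    labels   : dict grid_id -> ground-truth object_type
    """
    # Pick the highest-probability object type for every inferred grid.
    predicted = {g: max(dist, key=dist.get) for g, dist in inferred.items()}
    tp, n_pred, n_true = Counter(), Counter(), Counter()
    for g, truth in labels.items():
        n_true[truth] += 1
        pred = predicted.get(g)
        if pred is None:
            continue                      # grid never covered by inference
        n_pred[pred] += 1
        if pred == truth:
            tp[truth] += 1
    precision = {o: tp[o] / n_pred[o] for o in n_pred}
    recall = {o: tp[o] / n_true[o] for o in n_true}
    return precision, recall
```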
Fig. 5 shows the room schematic overlaid on the grids, with different colors showing different objects. From Table 2 we can see that recall is generally lower, because we may not have enough observations to cover all possible object locations, e.g., there is a large floor area the person has never walked into. Nevertheless, part of each object's area is covered after recognition. In addition, there is usually a shift between the recognized object position and the real object position. This is