x_j is represented by a feature vector φ(x_j) in R^d, for example, the head velocities at each frame. For each sequence, a vector of "substructure" variables h = {h_1, h_2, ..., h_m} is assumed. These variables are not observed in the training examples and will therefore form a set of hidden variables in the model.
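As a concrete illustration of this representation (the frame count, feature dimensionality, and array layout below are assumptions chosen for the example, not values from the text), one sequence could be stored as follows:

    import numpy as np

    m, d = 120, 3                 # assumed: 120 frames, 3-dimensional head-velocity features
    x = np.zeros((m, d))          # x[j] holds the feature vector phi(x_j) in R^d for frame j
    h = np.empty(m, dtype=int)    # h[j] is the "substructure" variable assigned to frame j
    # h is never observed in the training data; it is treated as hidden and
    # summed out by the model, as in the equation below.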
Given the above definitions, the latent conditional model is defined
as:
P(y | x, θ) = Σ_h P(y | h, x, θ) P(h | x, θ)     (1)
where θ are the parameters of the Latent-Dynamic CRF model. These
are learned automatically during training using a gradient ascent
approach to search for the optimal parameter values. More details can
be found in Morency et al. (2007).
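The following sketch illustrates how Equation (1) can be evaluated. It is not the authors' implementation: the scoring function, the number of hidden states, the mapping of hidden states to class labels, and the brute-force enumeration of h are simplifications made only for illustration; the full LDCRF restricts hidden states to disjoint sets per class label and relies on dynamic-programming inference, as described in Morency et al. (2007).

    import itertools
    import numpy as np

    def log_potential(h_seq, x, theta):
        """Unnormalized log-score of one hidden-state sequence (toy linear-chain scoring)."""
        W_obs, W_trans = theta                     # assumed emission and transition weights
        score = sum(W_obs[h_j].dot(x_j) for h_j, x_j in zip(h_seq, x))
        score += sum(W_trans[a, b] for a, b in zip(h_seq, h_seq[1:]))
        return score

    def p_y_given_x(y, x, theta, n_hidden, hidden_to_label):
        """Brute-force evaluation of Equation (1): sum over all hidden sequences h.

        Each hidden state is assumed to belong to exactly one class label, so
        P(y | h, x, theta) is 1 when every h_j maps to y and 0 otherwise.
        """
        m = len(x)
        all_h = list(itertools.product(range(n_hidden), repeat=m))
        log_scores = np.array([log_potential(h, x, theta) for h in all_h])
        p_h = np.exp(log_scores - log_scores.max())
        p_h /= p_h.sum()                           # P(h | x, theta)
        return sum(p for h, p in zip(all_h, p_h)
                   if all(hidden_to_label[h_j] == y for h_j in h))

    # Toy usage with made-up sizes: 5 frames of 3-D features and 4 hidden states.
    rng = np.random.default_rng(0)
    d, n_hidden, m = 3, 4, 5
    theta = (rng.normal(size=(n_hidden, d)), rng.normal(size=(n_hidden, n_hidden)))
    x = rng.normal(size=(m, d))
    hidden_to_label = {0: 0, 1: 0, 2: 1, 3: 1}     # two hidden states per class label
    print(p_y_given_x(1, x, theta, n_hidden, hidden_to_label))

In training, θ would then be adjusted by gradient ascent on the log of this probability over the labeled sequences, with the sum over h computed by dynamic programming rather than enumeration.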
The Latent-Dynamic CRF model was applied to the problem of learning individual dynamics of backchannel feedback. Figure 7 compares its recognition performance with previous approaches for probabilistic sequential modeling.
Figure 6. Graphical representation of the LDCRF model. x_j represents the j-th observation of the sequence, h_j is a hidden state assigned to x_j, and y_j is the class label of x_j (i.e., positive or negative). Gray circles are observed variables.
Figure 7. Recognition of backchannel feedback based on individual dynamics only.
Comparison of the Latent-Dynamic CRF model with previous approaches for probabilistic
sequential modeling.
(Color image of this figure appears in the color plate section at the end of the book.)