Fig. 11.11 Correspondence of postures of a gesture instance to their respective BMUs on the SSOM. These BMUs constitute the states of the Markov models.
the trajectory on the k-th sub-codebook SSOM. This is followed by a calculation of model likelihoods for all possible models, P(Tr_i | λ_k), 1 ≤ k ≤ K, and then by selecting the gesture whose model likelihood is the highest, i.e.,

k* = arg max_{1 ≤ k ≤ K} [P(Tr_i | λ_k)]    (11.22)

where P(Tr_i | λ_k) is the probability that the trajectory Tr_i is generated by HMM λ_k. This probability is calculated by using the forward algorithm [355].
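As a concrete illustration, the decision rule of Eq. (11.22) can be sketched as follows: run the forward algorithm once per gesture model λ_k = (A, B, π) on the BMU trajectory, then take the argmax over the resulting log-likelihoods. This is a minimal NumPy sketch under my own naming conventions (the text does not prescribe an implementation); the running rescaling of the forward variable is the standard trick to avoid numerical underflow on long trajectories.

```python
import numpy as np

def forward_log_likelihood(obs, A, B, pi):
    """Forward algorithm: log P(obs | lambda) for an HMM lambda = (A, B, pi).

    obs : sequence of observation-symbol indices (here, the BMU trajectory Tr_i)
    A   : (N_S, N_S) state transition matrix, A[i, j] = a_ij
    B   : (N_S, N_O) emission matrix, B[j, k] = b_j(k)
    pi  : (N_S,) initial state distribution
    """
    # alpha_t(i) = P(o_1 .. o_t, q_t = S_i | lambda), rescaled at each step;
    # the log of each normalizer is accumulated so log P(obs) is recovered exactly.
    alpha = pi * B[:, obs[0]]
    log_lik = 0.0
    for o in obs[1:]:
        c = alpha.sum()
        log_lik += np.log(c)
        alpha = (alpha / c) @ A * B[:, o]
    return log_lik + np.log(alpha.sum())

def classify(obs, models):
    """Eq. (11.22): return k* = argmax_k P(Tr_i | lambda_k)."""
    scores = [forward_log_likelihood(obs, *m) for m in models]
    return int(np.argmax(scores))
```

For example, with two single-state models whose emission distributions favor different symbols, a trajectory of mostly symbol 0 is assigned to the model that emits symbol 0 with high probability.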
11.7.3 Obtaining Learning Parameters
The learning phase of HMM determines the method used to adjust the model parameters (A, B, π) to maximize the probability of the observation sequence for the given model. We can choose λ = (A, B, π) such that P(Tr | λ) is locally maximized using an iterative procedure, the Baum-Welch algorithm. We can summarize the parameters used to characterize an HMM as such: (1) N_S, the number of states in the model. The states are interconnected in such a way that any state can be reached from any other state. We denote the individual states as S = {S_1, S_2, ..., S_{N_S}}, and the state at time t as q_t; (2) N_O, the number of observation symbols per state. We denote the individual symbols as V = {v_1, v_2, ..., v_{N_O}}; (3) the state transition probability distribution is denoted by A = {a_ij}, and the observation symbol probability distribution in state j is denoted by B = {b_j(k)}; (4) the initial state distribution is denoted by π = {π_i}. Define:

ξ_t(i, j) = P(q_t = S_i, q_{t+1} = S_j | Tr, λ)    (11.23)
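The quantity ξ_t(i, j) of Eq. (11.23) is computed from the forward variable α_t(i) and the backward variable β_t(i) as α_t(i) · a_ij · b_j(o_{t+1}) · β_{t+1}(j) / P(Tr | λ). A minimal NumPy sketch of this computation follows; the function and variable names are my own, and the recursions are left unscaled for clarity, so this is suitable only for short trajectories.

```python
import numpy as np

def xi(obs, A, B, pi):
    """Eq. (11.23): xi[t, i, j] = P(q_t = S_i, q_{t+1} = S_j | Tr, lambda).

    obs : sequence of observation-symbol indices (the trajectory Tr)
    A   : (N_S, N_S) transition matrix, B : (N_S, N_O) emission matrix,
    pi  : (N_S,) initial state distribution.
    """
    T, N = len(obs), len(pi)

    # Forward pass: alpha[t, i] = P(o_1 .. o_t, q_t = S_i | lambda)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = alpha[t - 1] @ A * B[:, obs[t]]

    # Backward pass: beta[t, i] = P(o_{t+1} .. o_T | q_t = S_i, lambda)
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    p_obs = alpha[-1].sum()  # P(Tr | lambda)

    # xi[t, i, j] = alpha_t(i) * a_ij * b_j(o_{t+1}) * beta_{t+1}(j) / P(Tr | lambda)
    out = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        out[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :] / p_obs
    return out
```

Since ξ_t is a probability distribution over state pairs given the trajectory, each ξ_t(·, ·) slice sums to one; Baum-Welch re-estimates A by summing these slices over t.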