Fig. 7.1 HMM description of the class-conditional independence between successive observation vectors
with simulated and real data showing that the classification error percentage can be
reduced by SICAMM in comparison with ICAMM.
7.1.2 Sequential ICAMM
To compute $p[C_k(n)/X(n)]$, we start from Eq. (7.1). We assume that, conditional on $C_k(m)$, the observed vectors $\mathbf{x}(m),\; m = 0 \ldots n$, are independent. This is a key assumption in the classical Hidden Markov Model (HMM) [8] structure that is described in Fig. 7.1. Statistical dependences between two successive instants are defined by the arrows connecting the successive classes. However, successive observed vectors are not directly connected, i.e., the distribution of every $\mathbf{x}(m)$ is totally defined if we know the corresponding class $C_k(m)$. In particular, this implies that

$$p[\mathbf{x}(n), \mathbf{x}(n-1)/C_k(n)] = p[\mathbf{x}(n)/C_k(n)]\, p[\mathbf{x}(n-1)/C_k(n)].$$

We developed the details of the SICAMM algorithm in [9].
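Equivalently, writing $k(m)$ for the class active at instant $m$, the same assumption factorizes the joint likelihood of the whole observation record $X(n) = [\mathbf{x}(0) \ldots \mathbf{x}(n)]$ conditioned on the class sequence:

$$p[X(n)/C_{k(0)}(0), \ldots, C_{k(n)}(n)] = \prod_{m=0}^{n} p[\mathbf{x}(m)/C_{k(m)}(m)].$$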
Let us describe the SICAMM algorithm in a more specific form. We assume that the parameters $\mathbf{A}_k, \mathbf{b}_k, p[\mathbf{s}_k],\; k = 1 \ldots K$, have been previously estimated by means of one of the several ICAMM learning algorithms available in the literature, and that the class-transition probabilities are also known or estimated. Table 7.1 describes the algorithm.
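As an illustration, the following is a minimal sketch in Python of one step of the recursion under the assumptions stated above: estimated parameters $\mathbf{A}_k$, $\mathbf{b}_k$, source log-densities, and a class-transition matrix. All identifiers are illustrative and are not taken from Table 7.1 or [9]:

import numpy as np

def sicamm_step(x, A, b, log_p_s, P_trans, posterior_prev):
    """One prediction/update step of a SICAMM-style recursion.

    A minimal sketch, not the literal Table 7.1 procedure.
      x               observation x(n), shape (M,)
      A               mixing matrices A_k, shape (K, M, M)
      b               centroids b_k, shape (K, M)
      log_p_s         callable (k, s) -> log source density of class k
      P_trans         P_trans[j, k] = p[C_k(n)/C_j(n-1)]
      posterior_prev  p[C_j(n-1)/X(n-1)], shape (K,)
    Returns p[C_k(n)/X(n)], shape (K,).
    """
    K = A.shape[0]
    # Prediction through the class-transition probabilities:
    # p[C_k(n)/X(n-1)] = sum_j p[C_k(n)/C_j(n-1)] p[C_j(n-1)/X(n-1)]
    pred = P_trans.T @ posterior_prev
    # Class-conditional likelihoods from the ICAMM model:
    # p[x/C_k] = p[s_k] / |det A_k|, with s_k = A_k^{-1} (x - b_k)
    like = np.empty(K)
    for k in range(K):
        s = np.linalg.solve(A[k], x - b[k])
        like[k] = np.exp(log_p_s(k, s)) / abs(np.linalg.det(A[k]))
    # Update (Eq. 7.3): the denominator of W_k(n), p[x(n)/X(n-1)],
    # is exactly the normalizer of the product below.
    post = like * pred
    return post / post.sum()

Iterating this step over $n = 1, 2, \ldots$, with the recursion initialized from some prior $p[C_k(0)]$, implements the sequential Bayesian processor of Eq. (7.3) below.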
Note that the SICAMM algorithm can be expressed in the form of a sequential Bayesian processor [8]

$$W_k(n) = \frac{p[\mathbf{x}(n)/C_k(n)]}{p[\mathbf{x}(n)/X(n-1)]}, \qquad p[C_k(n)/X(n)] = W_k(n)\, p[C_k(n)/X(n-1)], \qquad (7.3)$$
where $p[C_k(n)/X(n-1)]$ is a "prediction" of the current class given the past history of observations, and where $W_k(n)$ is an "updating weight" that measures the significance of the current class, relative to that of the past history of observations, for generating the current observation.
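Both quantities follow from the HMM assumptions of Fig. 7.1. The prediction propagates the previous posterior through the class-transition probabilities, and the denominator of $W_k(n)$ is the corresponding normalizer:

$$p[C_k(n)/X(n-1)] = \sum_{j=1}^{K} p[C_k(n)/C_j(n-1)]\, p[C_j(n-1)/X(n-1)],$$

$$p[\mathbf{x}(n)/X(n-1)] = \sum_{k=1}^{K} p[\mathbf{x}(n)/C_k(n)]\, p[C_k(n)/X(n-1)],$$

so that $p[C_k(n)/X(n)]$ in Eq. (7.3) sums to one over $k$.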