x = x_1 x_2 x_3 ··· x_l of wins and losses, with x_t ∈ M = {W, L}, t = 1, 2, ..., l, denoting the outcome of game t. The chances of winning or losing a game depend on the choice of die used for that game. We can think of each x_t as generated (emitted) by the hidden state π_t with a certain probability. Since a fair die gives the player a win with probability 2/3 = 0.67 and a loss with probability 1/3 = 0.33, we will say that the fair die emits a W with probability e_F(W) = 0.67 and that it emits an L with probability e_F(L) = 0.33. Similarly, the unfair die emits a W with probability e_U(W) = 0.4 and an L with probability e_U(L) = 0.6. The set M = {W, L} is the set of emitted states for the hidden process.
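Written as lookup tables, these emission probabilities are easy to work with. Below is a minimal Python sketch of the emission side of the model; the names EMIT and emission are ours, not the text's:

```python
# Emission probabilities of the two hidden states (Example 9.2):
# F = fair die, U = unfair die; each emits W (win) or L (loss).
EMIT = {
    "F": {"W": 0.67, "L": 0.33},  # e_F(W), e_F(L)
    "U": {"W": 0.40, "L": 0.60},  # e_U(W), e_U(L)
}

def emission(state: str, symbol: str) -> float:
    """Return e_state(symbol), the probability that `state` emits `symbol`."""
    return EMIT[state][symbol]

print(emission("U", "L"))  # 0.6
```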
The main difference between Markov chains and hidden Markov chains is that the
observed sequence x cannot be mapped to a unique path of state-to-state transitions for
the hidden process. Multiple hidden paths
can generate x and, as our next example
illustrates, they do so with different probabilities.
Example 9.3. In the context of Example 9.2, assume the following sequence of rolls is generated for four consecutive games: z = 2561. The recorded (observed) sequence of wins and losses is then x = WLLW. Any hidden sequence π = π_1 π_2 π_3 π_4, π_t ∈ {F, U}, could have generated this sequence, but different sequences will do so with very different probabilities. To illustrate, we will compute the probabilities P(x, π) and P(x, π′) that x is generated by each of the following hidden sequences: π = FUUU and π′ = FUFU. For P(x, π), we obtain

P(x, π) = 0.5 · e_F(W) · a_FU · e_U(L) · a_UU · e_U(L) · a_UU · e_U(W) = 0.000012.
The rationale is as follows. For x and π to occur, a series of events should take place: the fair die is chosen for the first game (this happens with probability 0.5), emitting a W; a switch from the fair to the unfair die follows and the unfair die emits an L; the unfair die is retained for the next game, emitting an L, and so on. The probability P(x, π) is then the product of the transition and emission probabilities. Similarly,

P(x, π′) = 0.5 · e_F(W) · a_FU · e_U(L) · a_UF · e_F(L) · a_FU · e_U(W) = 0.0000067,
which is different from P(x, π). In principle, if we were to guess which hidden path generated the observed sequence x, the smart way to do so would be to find the path π for which the probability P(x, π) is the largest.⁴ The probability P(x) of emitting the sequence x is the sum of the probabilities P(x, π) over all hidden sequences π:

P(x) = Σ_π P(x, π).
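These path probabilities can be checked mechanically: multiply the start, transition, and emission probabilities along the path, and sum over all 2^l paths for P(x). The sketch below does exactly that. Note that the transition probabilities a_FU, a_UF, etc. are given in Example 9.2, which is not reproduced in this excerpt, so the values in TRANS are placeholders that must be replaced with the actual ones; only the structure of the computation is the point.

```python
from itertools import product

# Emission probabilities from the text.
EMIT = {"F": {"W": 0.67, "L": 0.33}, "U": {"W": 0.40, "L": 0.60}}

# ASSUMED placeholder transition probabilities a_kl -- substitute the
# actual values from Example 9.2, which is not part of this excerpt.
TRANS = {"F": {"F": 0.95, "U": 0.05}, "U": {"F": 0.10, "U": 0.90}}
START = {"F": 0.5, "U": 0.5}  # the 0.5 factor in the products above

def joint_prob(x, path):
    """P(x, pi): product of start, transition, and emission probabilities."""
    p = START[path[0]] * EMIT[path[0]][x[0]]
    for t in range(1, len(x)):
        p *= TRANS[path[t - 1]][path[t]] * EMIT[path[t]][x[t]]
    return p

x = "WLLW"
print(joint_prob(x, "FUUU"))  # P(x, pi)
print(joint_prob(x, "FUFU"))  # P(x, pi')

# P(x): brute-force sum over all 2**l hidden paths (feasible for small l only;
# Section 4 gives the efficient way).
print(sum(joint_prob(x, "".join(pi)) for pi in product("FU", repeat=len(x))))
```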
Example 9.4. (The Occasionally Dishonest Casino example from [23]). Assume the hidden process that switches between the fair and unfair die is as described in Example 9.2, but now the player bets on the specific number of points rolled at each game. It would then make sense to record the sequence z = z_1 z_2 ··· z_l, where z_t ∈ {1, 2, 3, 4, 5, 6} stands for the number of points rolled in game t, t = 1, 2, 3, ..., l. The hidden states F and U now emit the symbols from the set M = {1, 2, 3, 4, 5, 6}, with probabilities e_F(1) = e_F(2) = e_F(3) = e_F(4) = e_F(5) = e_F(6) = 1/6 = 0.1667 and e_U(1) = e_U(2) = e_U(3) = e_U(4) = e_U(5) = 0.1 and e_U(6) = 0.5. We will use this HMM for several of the examples and exercises in Section 4.
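Viewed generatively, this HMM first evolves the hidden F/U state and then rolls whichever die the state prescribes. Here is a minimal simulation sketch, again with placeholder transition probabilities standing in for those of Example 9.2:

```python
import random

EMIT = {
    "F": {s: 1 / 6 for s in "123456"},  # fair die: all faces equally likely
    "U": {"1": 0.1, "2": 0.1, "3": 0.1, "4": 0.1, "5": 0.1, "6": 0.5},  # loaded die
}
# ASSUMED placeholder transitions -- replace with the values from Example 9.2.
TRANS = {"F": {"F": 0.95, "U": 0.05}, "U": {"F": 0.10, "U": 0.90}}

def simulate(l):
    """Return (hidden path, observed rolls) for l games."""
    state = random.choice("FU")  # assume each die is equally likely at game 1
    path, rolls = [], []
    for _ in range(l):
        path.append(state)
        symbols, probs = zip(*EMIT[state].items())
        rolls.append(random.choices(symbols, weights=probs)[0])
        # Switch (or keep) the die for the next game.
        state = random.choices("FU", weights=[TRANS[state]["F"], TRANS[state]["U"]])[0]
    return "".join(path), "".join(rolls)

print(simulate(10))
```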
⁴ We will show how this can be done in a computationally efficient way in Section 4.
 