Exercise 9.2. If a Markov chain has a nonzero probability of transitioning from a state to itself, it is possible that a path of the process will feature “runs” of the same symbol, indicating that the process remains in that same state for more than one step. For the Markov chain from Figure 9.5, those will be runs of Us and Fs. Generalizing, assume that the transition matrix in Example 9.1 is

P = \begin{pmatrix} a_{FF} & a_{FU} \\ a_{UF} & a_{UU} \end{pmatrix} = \begin{pmatrix} p & 1-p \\ 1-q & q \end{pmatrix}, where 0 < p, q < 1.
Show that the distribution of the lengths of the runs is as follows:
1. For runs of length at least l that do not start at the beginning of the sequence,

   P(a run of Fs of length at least l) = (1 - q)\, p^{l-1},
   P(a run of Us of length at least l) = (1 - p)\, q^{l-1}.

2. For runs of length at least l that start at the beginning of the sequence,

   P(a run of Fs of length at least l) = a_{0F}\, p^{l-1},
   P(a run of Us of length at least l) = a_{0U}\, q^{l-1}.
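Under one common reading of part 1 — a run of Fs begins only when the chain enters F from U — the first formula can be checked by Monte Carlo simulation. The sketch below is illustrative; the function name and parameter values are ours, not from the text:

```python
import random

# Monte Carlo check of the first formula in Exercise 9.2, read as:
# starting from state U, the probability that the next l states are
# all F (i.e., a run of Fs of length at least l begins) is
# (1 - q) * p**(l - 1), for the transition matrix
# P = [[p, 1-p], [1-q, q]] over states (F, U).

def run_prob_estimate(p, q, l, trials=100_000, seed=42):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        state = 'U'          # the run must be entered from state U
        ok = True
        for _ in range(l):
            if state == 'U':
                # U -> F with probability 1 - q
                state = 'F' if rng.random() < 1 - q else 'U'
            else:
                # F -> F with probability p
                state = 'F' if rng.random() < p else 'U'
            if state != 'F':
                ok = False
                break
        if ok:
            hits += 1
    return hits / trials

p, q, l = 0.95, 0.90, 5
theory = (1 - q) * p ** (l - 1)   # 0.1 * 0.95**4, about 0.0815
print(theory, run_prob_estimate(p, q, l))
```

With 100,000 trials the empirical frequency should land within a few thousandths of the closed-form value.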
Exercise 9.3. In the context of Exercise 9.2, show the following: (1) When p and q are fixed, the probability of a run of length at least l decreases exponentially as l increases. (2) When l is fixed, the probability of observing a run of length at least l increases as the probabilities p and q increase.
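Both claims can be seen numerically from the beginning-of-sequence formula P(a run of Fs of length at least l) = a_{0F} p^{l-1} derived in Exercise 9.2. A minimal sketch, with a_{0F} = 0.5 and illustrative parameter values of our choosing:

```python
# Numeric illustration of the two claims in Exercise 9.3, using the
# beginning-of-sequence formula from Exercise 9.2 with a_0F = 0.5.

def start_run_prob(p, l, a0F=0.5):
    return a0F * p ** (l - 1)

# (1) With p fixed, the probability decays geometrically in l.
decay = [start_run_prob(0.9, l) for l in (1, 2, 4, 8)]
assert all(a > b for a, b in zip(decay, decay[1:]))

# (2) With l fixed, the probability increases with p.
growth = [start_run_prob(p, 5) for p in (0.5, 0.7, 0.9)]
assert all(a < b for a, b in zip(growth, growth[1:]))

print(decay, growth)
```

The analogous formula for runs of Us, a_{0U} q^{l-1}, behaves the same way in q.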
9.3.2 Hidden Markov Models (HMMs)
HMMs generalize Markov chains by assuming that the process described by the Markov chain is not readily observable (it is hidden). According to some rules, each hidden state generates (emits) a symbol, and only the sequence of emitted symbols is observed. The following example, inspired by the Occasionally Dishonest Casino example by Durbin et al. [23], illustrates the idea.
Example 9.2. The outcome of a game is determined by the roll of a die. If the number of points on the die is 1, 2, 3, or 4, the player wins, and if the number of points is 5 or 6, the player loses. The casino uses a fair or an unfair die for each game. For the fair die, all outcomes are equally likely, but the unfair die is heavily biased toward 6, with p(i) = 0.1 for i = 1, 2, 3, 4, 5 and p(6) = 0.5. The process of switching between the dice is a Markov chain with state space Q = \{F, U\}, transition matrix

P = \begin{pmatrix} 0.95 & 0.05 \\ 0.10 & 0.90 \end{pmatrix},

and initial distribution a_{0F} = 0.5, a_{0U} = 0.5, just as in Example 9.1. The player is unaware which die is being used for each game or when a switch between the fair and unfair dice occurs. Thus the paths π of this Markov process are hidden.

The player records wins and losses from consecutive games in a sequence of Ws and Ls (e.g., x = WWLLLLLWLL), where W stands for a win and L stands for a loss. This way, for each sequence of games, the player generates a record
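The setup of Example 9.2 can be sketched as a short simulation: sample the hidden F/U path from the Markov chain, and at each step emit a W or L according to the die in use (the fair die wins with probability 4/6; the unfair die wins with probability p(1) + p(2) + p(3) + p(4) = 0.4). The helper names below are ours, not from the text:

```python
import random

# Simulation of the casino HMM from Example 9.2: a hidden Markov chain
# switches between a fair (F) and an unfair (U) die; only the win/loss
# record is observed.

TRANS = {'F': {'F': 0.95, 'U': 0.05},   # transition matrix P
         'U': {'F': 0.10, 'U': 0.90}}
P_WIN = {'F': 4 / 6,   # fair die: win on 1-4, each with prob 1/6
         'U': 0.4}     # unfair die: p(1)+p(2)+p(3)+p(4) = 0.4

def simulate(n, seed=0):
    rng = random.Random(seed)
    state = 'F' if rng.random() < 0.5 else 'U'   # a_0F = a_0U = 0.5
    path, record = [], []
    for _ in range(n):
        path.append(state)
        record.append('W' if rng.random() < P_WIN[state] else 'L')
        state = 'U' if rng.random() < TRANS[state]['U'] else 'F'
    return ''.join(path), ''.join(record)

path, record = simulate(10)
print(path)    # hidden path of F/U states (unknown to the player)
print(record)  # observed record of Ws and Ls
```

Only `record` is available to the player; recovering the likely `path` from it is exactly the inference problem HMMs are designed for.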
 