stating that the probability for the process transitioning to $\pi_{t+1}$ at time $t+1$ does not depend on the entire “history” of the process $\pi_1 \pi_2 \cdots \pi_t$ but only on its current state $\pi_t$.
For any two states $i, j \in Q$, we consider the transition probability $a_{ij} = P\{\pi_{t+1} = j \mid \pi_t = i\}$, defined as the probability that the process moves from state $i$ to state $j$ when the (discrete) time changes from $t$ to $t+1$. When $i = j$, $a_{ii}$ is the probability that the process remains in state $i$ at time $t+1$. The transition probabilities are the same for all values of $t$, meaning that the transitions depend only on the state of the process and not on when the process visits that state. The transition probabilities are commonly organized in a transition matrix
$$P = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}, \qquad a_{ij} \geq 0,$$

with $\sum_{j \in Q} a_{ij} = 1$ for all $i \in Q$.$^3$ The initial state for the process is determined by the initial distribution $p_i = P(\pi_1 = i)$, $i \in Q$, where $\sum_{i \in Q} p_i = 1$.
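To make the definitions concrete, here is a minimal Python sketch (not from the text; the two states and all numbers are hypothetical) that stores a transition matrix and an initial distribution, checks the two normalization conditions above, and samples a path:

```python
import numpy as np

# Hypothetical two-state chain over Q = {s1, s2}; all numbers are illustrative.
P = np.array([
    [0.9, 0.1],   # a_11, a_12: transitions out of s1
    [0.3, 0.7],   # a_21, a_22: transitions out of s2
])
p = np.array([0.5, 0.5])  # initial distribution p_i = P(pi_1 = i)

# Each row of P sums to 1 (the process is in some state from Q at the next
# time step), and the initial distribution sums to 1 as well.
assert np.allclose(P.sum(axis=1), 1.0)
assert np.isclose(p.sum(), 1.0)

rng = np.random.default_rng(0)

def sample_path(length):
    """Draw pi_1 from p, then repeatedly step using the row P[state]."""
    state = rng.choice(len(p), p=p)
    path = [state]
    for _ in range(length - 1):
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

print(sample_path(10))  # a list of 10 state indices
```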
It is often convenient to introduce notation that would allow for the initial distribution and for the ending of the sequence to be treated as transition probabilities and included in the notation $a_{ij}$, $i, j \in Q$. To accomplish this, a hypothetical “beginning” state $B$ and an “ending” state $E$ are introduced with the assumption that the process begins at state $B$ at time $t = 0$ with probability 1 and it transitions to $E$ with probability 1 at the end of each path. The probability for transitioning into $B$ after time $t = 0$ is zero, and the probability of transitioning out of $E$ is zero. With these additions, each path $\pi = \pi_1 \pi_2 \cdots \pi_l$ of the Markov chain can be expanded to $\pi = B \pi_1 \pi_2 \cdots \pi_l E = \pi_0 \pi_1 \pi_2 \cdots \pi_l \pi_{l+1}$. We can append a superficial state denoted by 0 to $Q$, $Q = \{0, s_1, \ldots, s_n\}$, and write $a_{0i} = p_i = P(\pi_1 = i \mid \pi_0 = B)$ for the initial distribution, for all $i \in Q$, and $a_{i0} = P(\pi_{l+1} = E \mid \pi_l = i) = 1$, for all $i \in Q$ (at the end of each path $\pi = \pi_0 \pi_1 \pi_2 \cdots \pi_l$, the process moves to $E$ with probability 1). In what follows we will not explicitly append the symbols $B$ and $E$ at the beginning and at the end of all paths but, when it is necessary, the transition probabilities will be interpreted in this generalized sense.
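As a concrete illustration (again not from the text, and using the same hypothetical numbers as above), the following Python sketch augments the transition matrix with the superficial state 0, storing the initial distribution as the row $a_{0i} = p_i$. Following the text, the move into $E$ happens with probability 1 only at the end of a path, so it is recorded in a comment rather than stored as a competing transition:

```python
import numpy as np

# Same hypothetical two-state chain as above; the original state indices are
# shifted by one so that index 0 can play the role of the superficial state.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
p = np.array([0.5, 0.5])

n = len(p)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = p     # a_{0i} = p_i: leaving the beginning state B (index 0)
A[1:, 1:] = P    # the original transition probabilities a_{ij}
# a_{i0} = P(pi_{l+1} = E | pi_l = i) = 1 holds only at the end of a path,
# so the move into E is not stored as a competing transition in A.

assert A[0, 0] == 0                 # the process never returns to B after t = 0
assert np.isclose(A[0].sum(), 1.0)  # row 0 is exactly the initial distribution
```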
For any observed path $\pi = \pi_0 \pi_1 \pi_2 \cdots \pi_l$, we apply the Markov property to compute its probability as follows:
$$P(\pi) = P(\pi_0 \pi_1 \pi_2 \cdots \pi_l) = P\{\pi_l \mid \pi_{l-1}, \pi_{l-2}, \ldots, \pi_0\}\, P(\pi_0 \pi_1 \cdots \pi_{l-1})$$
$$= P\{\pi_l \mid \pi_{l-1}\}\, P(\pi_0 \pi_1 \pi_2 \cdots \pi_{l-1}) = \cdots = a_{\pi_{l-1} \pi_l} a_{\pi_{l-2} \pi_{l-1}} \cdots a_{\pi_0 \pi_1},$$

since the recursion terminates with $P(\pi_0) = P(B) = 1$.
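A direct Python transcription of this product (a minimal sketch, reusing the hypothetical two-state chain from above, with $a_{0\,\pi_1} = p_{\pi_1}$ supplying the first factor):

```python
import numpy as np

# Hypothetical two-state chain from the sketches above.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
p = np.array([0.5, 0.5])

def path_probability(path):
    """P(pi) = a_{0,pi_1} * a_{pi_1 pi_2} * ... * a_{pi_{l-1} pi_l},
    where a_{0,pi_1} = p_{pi_1} supplies the first factor."""
    prob = p[path[0]]
    for s, t in zip(path, path[1:]):
        prob *= P[s, t]   # one factor a_{pi_t pi_{t+1}} per step
    return prob

print(path_probability([0, 0, 1, 1]))  # 0.5 * 0.9 * 0.1 * 0.7 = 0.0315
```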
$^3$ This condition simply reflects the fact that the process will be in some state from $Q$ at the next time step, unless it terminates.