[Figure: state s_0 moves at time i to an absorbing Ψ-labeled state with probability 1/3^i and remains in s_0 with probability 1 − 1/3^i.]
Fig. 2. A nonhomogeneous discrete-time Markov chain
probabilities and use BDD-based symbolic model checking [4]. Other models
may require more advanced techniques (see, e.g., [13]). A discussion of concrete
techniques for reachability analysis is beyond the scope of this paper. Clarke
et al. [6] cover this topic in great depth.
For some nonhomogeneous Markov chains, termination may not be guaranteed (with probability 1) even after states have been made absorbing based on reachability analysis. Consider the nonhomogeneous Markov chain in Fig. 2, where i represents time. While EF Ψ holds in s_0, the probability measure of trajectories that start in s_0 and never terminate is approximately 0.56. Hence, if we applied the reachability-based approach to this model, more than half of the sample trajectories would never terminate.
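As a sanity check, the 0.56 figure can be reproduced numerically. The sketch below assumes, based on the labels in Fig. 2, that the chain leaves s_0 at time i with probability 1/3^i; the probability of never terminating is then the infinite product of the stay probabilities, which converges to a positive value because the exit probabilities decay geometrically.

# Hedged sketch: estimate the probability that a trajectory starting
# in s_0 never terminates, assuming (as Fig. 2 suggests) that the
# chain leaves s_0 at time i with probability 3**-i. The answer is
# the infinite product prod_{i>=1} (1 - 3**-i), which is positive
# because the exit probabilities 3**-i are summable.
p_never = 1.0
for i in range(1, 60):    # terms beyond i = 60 are negligible at double precision
    p_never *= 1.0 - 3.0 ** -i
print(f"P(never terminate) ~= {p_never:.4f}")   # prints ~= 0.5601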
This example shows that the reachability approach is not applicable to all
Markov chains. The following theorem, however, identifies a large class of models
for which this approach is applicable. This theorem, too, is standard in Markov
chain theory (see, e.g., [2]).
Theorem 2. If M is a finite-state homogeneous discrete-time Markov chain, then the probability measure is zero for the set of infinite trajectories of M_R.
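The intuition can be sketched as follows (a hedged sketch, not the proof from [2]): after the reachability-based transformation, every non-absorbing state of M_R can reach some absorbing state, and in a finite homogeneous chain this comes with time-independent bounds.

% Hedged proof sketch under the stated assumption: every
% non-absorbing state of M_R reaches an absorbing state within
% k steps with probability at least \delta > 0, where k and
% \delta are uniform over time because the chain is finite and
% homogeneous.
\[
  \Pr[\text{not absorbed after } mk \text{ steps}] \le (1-\delta)^m
  \xrightarrow{\,m \to \infty\,} 0 ,
\]
% so the set of infinite trajectories has probability measure zero.
% Homogeneity is essential: in Fig. 2 the exit probability 1/3^i
% decays with i, no uniform \delta exists, and the bound fails.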
3.2 Sampling-Based Method with Termination Probability
The first solution method cannot be used if reachability analysis is ineffective, as exemplified by the model in Fig. 2. In the case of infinite-state systems, reachability analysis may not even be feasible.
To ensure finite trajectories without relying on reachability analysis, we can introduce a termination probability p_T < 1, which is used as follows. Start with the stochastic discrete-event system M. Let M_T be the system we get if, before each transition out of a non-absorbing state in M, we terminate execution prematurely with probability p_T. Concretely, for a discrete-time Markov chain with transition probabilities p_ij(n), we construct a new discrete-time Markov chain with transition probabilities (1 − p_T) · p_ij(n). Figure 1(c) shows the result of this transformation on the model in Fig. 1(a) using termination probability p_T = 0.1 (we note later on that there are limitations on the choice of p_T; 0.1 is not an admissible choice for all models). Each transition probability in M is multiplied by 1 − p_T to obtain the corresponding transition probability in M_T. For example, 0.5 becomes (1 − 0.1) · 0.5 = 0.45. The outgoing transition probabilities now sum to 1 − p_T for all non-absorbing states. In reality, of course, we never construct the new Markov chain. Instead, we just take the termination probability into account when we generate sample trajectories: at each state, we terminate the generation of the trajectory prematurely with probability p_T.
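To make the sampling procedure concrete, the following is a minimal sketch, assuming a hypothetical representation of a discrete-time Markov chain as a map from each non-absorbing state to its outgoing (next state, probability) pairs; absorbing states are simply absent from the map.

import random

def sample_trajectory(chain, start, p_T, rng=random):
    """Generate one trajectory, terminating prematurely with
    probability p_T before each transition out of a non-absorbing
    state (i.e., simulating M_T without ever constructing it)."""
    trajectory = [start]
    state = start
    while state in chain:                  # absorbing states end the loop
        if rng.random() < p_T:             # premature termination
            break
        r, acc = rng.random(), 0.0
        target = chain[state][-1][0]       # fallback guards rounding error
        for next_state, p in chain[state]:
            acc += p
            if r < acc:
                target = next_state
                break
        state = target
        trajectory.append(state)
    return trajectory

# Hypothetical example: from s0, move to absorbing states s1 or s2
# with probability 0.5 each; with p_T = 0.1, the effective transition
# probabilities become 0.45 each, matching the 0.5 -> 0.45 example above.
chain = {"s0": [("s1", 0.5), ("s2", 0.5)]}
print(sample_trajectory(chain, "s0", p_T=0.1))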