perspective, and at the same time make it scalable. Such augmented FSMs are our
OPs, which typically form Markov chains, as we describe below.
A Markov chain can be viewed as an FSM where the probabilities associated with
different state transitions satisfy the so-called Markovian or memoryless property:
from the current state $X_n = i$ at time or stage $n$, the probability of a transition
to state $X_{n+1} = j$ for the next time period or stage $n+1$ is denoted $p_{ij}$,
which is independent of the history, i.e.,

$$P\{X_{n+1} = j \mid X_n = i, X_{n-1} = s_{n-1}, \ldots, X_0 = s_0\} = P\{X_{n+1} = j \mid X_n = i\} = p_{ij},$$

where $0 \le p_{ij} \le 1$ and $\sum_j p_{ij} = 1$.
In other words, the probability that the system will be in state $j$ depends only
on the current state $i$ and the history-independent transition probability $p_{ij}$.
Equivalently, the complete history is summarized in the current state, thus removing
the need to look back into the past to determine the next transition to take. This
property is called the memoryless or Markovian property in stochastic processes [22].
A simplified test of the memoryless property has been proposed and successfully used
to verify that web usage can be accurately modeled by Markov chains [26].
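As a concrete illustration, which is not taken from the cited work, the Python sketch below estimates the $p_{ij}$ entries of such a chain from a recorded usage log via simple bigram counts; the function name and the toy log are hypothetical.

```python
from collections import Counter, defaultdict

def estimate_transition_probabilities(state_sequence):
    """Estimate p_ij from an observed state sequence by bigram counts.

    Under the memoryless assumption, the maximum-likelihood estimate of
    p_ij is (# of i -> j transitions) / (# of times i is followed by
    some state), so each row automatically sums to 1.
    """
    pair_counts = Counter(zip(state_sequence, state_sequence[1:]))
    out_counts = Counter(state_sequence[:-1])
    p = defaultdict(dict)
    for (i, j), count in pair_counts.items():
        p[i][j] = count / out_counts[i]
    return dict(p)

# Hypothetical usage log over states A, B, C, D.
log = ["A", "B", "C", "D", "A", "B", "C", "B", "C", "D"]
p = estimate_transition_probabilities(log)
for row in p.values():
    # Each row satisfies 0 <= p_ij <= 1 and sum_j p_ij = 1.
    assert abs(sum(row.values()) - 1.0) < 1e-9
print(p["C"])  # {'D': 0.666..., 'B': 0.333...}
```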
Figure 6 shows a sample Markov chain with probabilistic state transitions. For
example, after state B the next state to follow is always C, as represented by the
transition probability $p(B, C) = 1$. The state to follow C could be either D, with
probability $p(C, D) = 0.99$, in the normal case, or B, with probability
$p(C, B) = 0.01$, on the rare occasions when the MS (mobile station) is unable to
receive the paging channel. Notice that we have omitted the input/output information
in such Markov OPs to keep the illustration simple, with the understanding that this
information is available to us to sensitize testing.
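A minimal Python sketch of how such a Markov OP drives statistical testing follows: a random walk over the chain generates test paths in proportion to their usage probability. Only the transitions out of B and C are quoted from the Fig. 6 fragment above; the entry and exit transitions are hypothetical placeholders added so the walk can start and stop.

```python
import random

CHAIN = {
    "A": {"B": 1.0},              # hypothetical entry
    "B": {"C": 1.0},              # p(B, C) = 1
    "C": {"D": 0.99, "B": 0.01},  # normal case vs. rare paging failure
    "D": {"END": 1.0},            # hypothetical exit
}

def generate_test_path(chain, start="A", end="END", rng=random):
    """Random walk over the OP: heavily used paths are generated (and
    therefore tested) more often, matching their usage probability."""
    path, state = [start], start
    while state != end:
        row = chain[state]
        state = rng.choices(list(row), weights=row.values())[0]
        path.append(state)
    return path

print(generate_test_path(CHAIN))
# Usually ['A', 'B', 'C', 'D', 'END']; about 1 walk in 100 revisits B
# via the rare C -> B transition before reaching D.
```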
Much of the previous work with statistical testing used individual Markov chains
[30,56]. For large systems, a collection of Markov chains might be used for testing,
organized into a hierarchical framework called unified Markov models (UMMs)
[20,49,50]. For example, the top-level Markov chain for call processing in a cellular
communication network is the one represented in Fig. 6. However, various sub-operations
may be associated with each individual state in the top-level Markov chain, and these
could be modeled by more detailed Markov chains, such as the one in Fig. 7 for the
expanded state E. Notice that in some of these Markov chains the sum of the
probabilities for transitions out of a given state may be less than 1, because the
external destinations (and sources) are omitted to keep the models simple. The
implicit understanding in UMMs is that the missing probability mass goes to these
external destinations.
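To make these two conventions concrete, here is a Python sketch, again not taken from the cited work: a top-level chain, a lower-level chain expanding one of its states, and a successor function that routes any probability mass missing from a row to an explicit EXTERNAL destination. All state names, the sub-model structure, and every probability other than the Fig. 6 transitions quoted earlier are hypothetical.

```python
import random

# Hypothetical two-level UMM. Only p(B,C), p(C,D), and p(C,B) come from
# the Fig. 6 description; everything else is an illustrative placeholder.
TOP = {
    "A": {"B": 1.0},
    "B": {"C": 1.0},
    "C": {"D": 0.99, "B": 0.01},
    "D": {"E": 0.9},   # row sums to 0.9: 0.1 implicitly goes external
    "E": {"A": 1.0},
}

# Hypothetical lower-level chain expanding top-level state E (cf. Fig. 7).
SUB_E = {
    "E.enter": {"E.work": 1.0},
    "E.work":  {"E.work": 0.2, "E.exit": 0.8},
}

def next_state(row, rng=random):
    """Pick a successor; mass missing from the row is routed to the
    implicit external destination, per the UMM convention."""
    missing = max(0.0, 1.0 - sum(row.values()))
    targets = list(row) + ["EXTERNAL"]
    return rng.choices(targets, weights=list(row.values()) + [missing])[0]

def expand_E(rng=random):
    """Walk the lower-level chain for E until it exits, exercising the
    sub-operations behind a single top-level state more thoroughly."""
    path, state = [], "E.enter"
    while state in SUB_E:
        state = next_state(SUB_E[state], rng)
        path.append(state)
    return path

print(next_state(TOP["D"]))  # 'E' about 90% of the time, else 'EXTERNAL'
print(expand_E())            # e.g. ['E.work', 'E.exit']
```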
The higher-level operations in a UMM can be expanded into lower-level UMMs
for more thorough testing. Therefore, UMMs are more suitable for large systems