Let |σ| denote the number of state transitions along the trajectory σ. Define the random variable X_T : Path_T(s) → [0, ∞) as follows:

    X_T(σ) = (1 − p_T)^(−|σ|)   if M_T, σ |= Φ U Ψ
             0                   if M_T, σ ⊭ Φ U Ψ                (10)
Trajectories that satisfy Φ U Ψ are finite as they must terminate in a Ψ-satisfying state, so (10) is well-defined. Note the negative exponent, which means that the weight of a satisfying trajectory grows exponentially in the length of the trajectory. This construction is due to von Neumann and Ulam (see [10,11]) as a way to compute the inverse of a matrix by the Monte Carlo method.
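As a rough illustration of this construction, the sketch below draws one observation of X_T by simulating a trajectory of M_T. It assumes, beyond what this excerpt states, that M_T terminates each transition of the original chain independently with probability p_T, and it uses hypothetical callbacks step, sat_phi, and sat_psi to stand in for the model and the formulas Φ and Ψ.

import random

def sample_X_T(s0, step, sat_phi, sat_psi, p_T):
    """Draw one observation of X_T by simulating the terminating chain M_T.

    Hypothetical interface: step(s) samples a successor of s in the original
    chain, sat_phi(s)/sat_psi(s) decide Phi and Psi in state s, and M_T is
    assumed to terminate each transition independently with probability p_T.
    """
    s = s0
    length = 0                        # |sigma|, number of state transitions taken
    while True:
        if sat_psi(s):                # trajectory ends in a Psi-satisfying state
            return (1.0 - p_T) ** (-length)    # weight from Eq. (10)
        if not sat_phi(s):            # Phi violated before Psi was reached
            return 0.0
        if random.random() < p_T:     # early termination introduced by M_T
            return 0.0
        s = step(s)                   # otherwise take a transition of the original chain
        length += 1

Each surviving transition multiplies the eventual weight by 1/(1 − p_T), the reciprocal of the probability that the transition was not cut short, so the exponential growth of the weight noted above compensates for the termination probability.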
Theorem 3. Let X be the random variable defined in (6) and let X_T be the random variable defined in (10). The expectation of X_T is the same as the expectation of X: E[X_T] = E[X].
We can thus use observations of X_T instead of X to solve the model-checking problem M, s |= P_{≥θ}[Φ U Ψ]. Unlike X_R, which represents a Bernoulli trial, the distribution of X_T is unknown. Therefore we cannot use the same efficient hypothesis-testing techniques as before. However, because E[X_T] = E[X] we can use an estimation-based approach. If we can obtain an estimate p of E[X_T], then we can decide M, s |= P_{≥θ}[Φ U Ψ] by comparing p to the threshold θ. While model checking using M_T requires more expensive sampling techniques than M_R, we can show that it is more generally applicable.
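To make the estimation-based decision concrete, here is a minimal sketch under assumed parameters: average independent observations of X_T and compare the resulting estimate to the threshold θ. The fixed sample size and the acceptance rule are illustrative assumptions; the excerpt itself defers the choice of sample size to the sequential procedure described next.

def estimate_and_decide(sample_xt, theta, n=100000):
    """Illustrative fixed-sample-size version of the estimation-based approach:
    average n independent observations of X_T and compare the estimate to the
    threshold theta. The sample size n and the acceptance rule are assumptions;
    the sequential procedure below chooses the sample size adaptively."""
    p_hat = sum(sample_xt() for _ in range(n)) / n
    return p_hat >= theta            # accept P_{>=theta}[Phi U Psi] iff estimate meets theta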
Theorem 4. The probability measure is zero for the set of infinite trajectories of M_T.
Note that Theorem 4 does not depend on any property of M, so we are guaranteed (with probability one) finite sample trajectories for any model. For example, Theorem 4 applies to the nonhomogeneous Markov chain in Fig. 2, as well as to any infinite-state Markov process and even general discrete-event systems.
It remains to find a way to estimate E[X_T]. Chow and Robbins [5] provide such a method. Their method is a sequential procedure for computing a fixed-width confidence interval for a random variable with unknown but finite variance. We can use their procedure to obtain a confidence interval for E[X_T] of width 2δ centered around a point estimate p with coverage probability at least 1 − α. Let x_i be the i-th observation of X_T and let x̄_n be the arithmetic mean of the first n observations. Furthermore, let a_1, a_2, ... be a sequence of positive constants such that lim_{n→∞} a_n = Φ^{−1}(1 − α/2), where Φ^{−1} is the inverse standard normal cumulative distribution function (in practice, we can choose a_n to be the 1 − α/2 quantile of the t-distribution with n − 1 degrees of freedom). The stopping rule for the sequential procedure is then given by [5, Eqn. 3]:
    N = inf{ n ≥ 1 : (1/n) Σ_{i=1}^{n} (x_i − x̄_n)² + 1/n ≤ δ²n / a_n² }        (11)
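The stopping rule translates directly into a sequential sampling loop. The sketch below is one possible reading, assuming observations of X_T are drawn one at a time via a hypothetical sample_xt callback, a_n is the t-distribution quantile suggested above (computed with scipy.stats), and sampling starts at n_min = 2 so that the quantile with n − 1 degrees of freedom is defined; these implementation choices are assumptions, not part of the excerpt beyond the rule itself.

from scipy.stats import t as student_t

def chow_robbins_estimate(sample_xt, delta, alpha, n_min=2):
    """Sequential fixed-width confidence-interval estimate of E[X_T] in the
    spirit of Chow and Robbins, stopping at the first n satisfying rule (11):
        (1/n) * sum_i (x_i - xbar_n)^2 + 1/n  <=  delta^2 * n / a_n^2,
    with a_n the 1 - alpha/2 quantile of the t-distribution with n - 1 degrees
    of freedom. n_min = 2 (so the t-quantile is defined) is an assumption."""
    xs, total, n = [], 0.0, 0
    while True:
        x = sample_xt()               # one observation of X_T
        xs.append(x)
        total += x
        n += 1
        if n < n_min:
            continue
        xbar = total / n
        a_n = student_t.ppf(1.0 - alpha / 2.0, n - 1)
        lhs = sum((xi - xbar) ** 2 for xi in xs) / n + 1.0 / n
        if lhs <= (delta ** 2) * n / (a_n ** 2):
            return xbar, n            # point estimate and stopping time N

For example, chow_robbins_estimate(lambda: sample_X_T(s0, step, sat_phi, sat_psi, p_T), delta=0.01, alpha=0.05) would return an estimate p whose interval [p − δ, p + δ] covers E[X_T] with probability roughly 1 − α (asymptotically, per Chow and Robbins), ready to be compared against the threshold θ.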