We can now use p̃ = x̄_N as a point estimate for E[X_T], and accept P_{≥θ}[Φ U Ψ] as true if and only if p̃ ≥ θ. As shown by Younes [25], this gives us a model-checking procedure that satisfies conditions (4) and (5) with β = α.

It should be noted that the procedure of Chow and Robbins provides asymptotic guarantees only, meaning that the coverage probability of the confidence interval is guaranteed to be 1 − α only in the limit as δ approaches 0. In practice, the coverage probability can be somewhat less than 1 − α for any selected δ, no matter how small, as has been shown for the normal distribution [9]. On the other hand, empirical coverage tends to be greater than 1 − α for Bernoulli random variables. Further empirical studies are needed to determine the empirical coverage for the type of random variables we have here, which are neither normal nor Bernoulli, but this is beyond the scope of our paper.
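The kind of empirical coverage study mentioned above can be sketched for Bernoulli samples. The stopping rule below is the standard Chow and Robbins form, n ≥ (z_{1−α/2}/δ)²(S_n² + 1/n); the function names, parameter values, and the toy success probability are illustrative choices, not taken from the paper:

```python
import numpy as np

Z = 1.959963984540054  # standard normal quantile z_{0.975}, i.e. alpha = 0.05

def chow_robbins(sample, delta, z=Z, rng=None):
    """Sample sequentially until the fixed-width interval mean +/- delta
    satisfies the Chow-Robbins stopping rule n >= (z/delta)^2 (S_n^2 + 1/n)."""
    n, s, ss = 0, 0.0, 0.0
    while True:
        x = sample(rng)
        n, s, ss = n + 1, s + x, ss + x * x
        if n >= 2:
            var = (ss - s * s / n) / (n - 1)  # sample variance S_n^2
            if n >= (z / delta) ** 2 * (var + 1.0 / n):
                m = s / n
                return m - delta, m + delta, n

rng = np.random.default_rng(0)
p_true = 0.3  # hypothetical Bernoulli parameter
bernoulli = lambda r: float(r.random() < p_true)

trials = 200
covered = sum(
    lo <= p_true <= hi
    for lo, hi, _ in (chow_robbins(bernoulli, delta=0.05, rng=rng)
                      for _ in range(trials))
)
print(f"empirical coverage over {trials} runs: {covered / trials:.3f}")
```

For Bernoulli samples one would expect the printed coverage to come out at or above 1 − α, consistent with the observation above; a serious study would of course vary δ, α, and the underlying distribution.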
A prerequisite for using the procedure of Chow and Robbins is that X_T has finite variance. This restriction effectively limits our choice of the termination probability p_T. For finite-state homogeneous Markov chains, we have the following theoretical result:
Theorem 5. Let P be the transition probability matrix for M (the original model). X_T has finite variance iff p_T < 1/ρ(P), where ρ(P) is the subdominant (second-largest) eigenvalue of P.
In practice, computing the subdominant eigenvalue of a stochastic matrix is no easier than solving the model-checking problem at hand, so choosing the right value for p_T is not trivial. In Sect. 5, we apply the algorithm to parametric models [6] and compute ρ(P) for small models to find a p_T that is likely to give finite variance for larger variations of the same basic model. It is important to note, however, that numerical iterative solution methods suffer from exactly the same problem, as discussed in the next section.
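The computation of ρ(P) for a small model can be sketched as follows; the 3-state transition matrix is purely illustrative, and NumPy's dense eigenvalue routine stands in for whatever solver one would use in practice:

```python
import numpy as np

# Purely illustrative 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])

# Eigenvalue moduli, sorted in decreasing order.
mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]

# For a stochastic matrix the dominant eigenvalue is 1; the next-largest
# modulus is the subdominant eigenvalue rho(P) from Theorem 5.
rho = mags[1]
p_T_max = 1.0 / rho  # finite variance requires p_T < 1/rho(P)
print(f"rho(P) = {rho:.4f}, so choose p_T < {p_T_max:.4f}")
```

For large models, a dense eigendecomposition like this is exactly what the paragraph above warns against; the point of the Sect. 5 strategy is to compute ρ(P) only for small instances and extrapolate the choice of p_T.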
4 Related Work
To verify the formula P_{≥θ}[Φ U Ψ] in some state s, we can first compute the probability measure p of the set of trajectories that start in s and satisfy Φ U Ψ, and then compare p to θ. A numerical computation of p for any state of a Markov-chain model amounts to the solution of a set of linear equations specified as follows (cf. [1]). Let P be the transition probability matrix of the Markov chain and let P′ equal P, with the exception that states satisfying ¬Φ ∨ Ψ have been made absorbing. Furthermore, let v be a binary column vector with a 1 in each row corresponding to a state that satisfies Ψ. Then p is the solution to

p = P′ · p + v .    (12)
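For concreteness, the construction of P′ and v and the solution of (12) can be sketched for a toy 3-state chain; the matrix and the state labelling are hypothetical, and making a state absorbing is realized here by zeroing its outgoing row of P′:

```python
import numpy as np

# Hypothetical 3-state chain: state 0 satisfies Phi only, state 1 satisfies
# Psi, and state 2 satisfies neither Phi nor Psi.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2],
              [0.1, 0.1, 0.8]])
phi = np.array([True,  False, False])
psi = np.array([False, True,  False])

# P' equals P except that states satisfying (not Phi) or Psi lose their
# outgoing transitions, i.e. they are made absorbing.
Pp = P.copy()
Pp[~phi | psi] = 0.0

# v has a 1 in each row corresponding to a Psi-state.
v = psi.astype(float)

# Direct solution of (I - P') p = v, the rearranged form of Eq. (12).
p = np.linalg.solve(np.eye(3) - Pp, v)
print(p)  # p[0] = 0.5 / 0.8 = 0.625

# A Jacobi-style fixed-point iteration p <- P' p + v converges to the same p.
q = np.zeros(3)
for _ in range(100):
    q = Pp @ q + v
assert np.allclose(p, q)
```

The direct solve corresponds to the Gaussian-elimination route discussed next, while the fixed-point loop is the simplest iterative alternative; both scale very differently with the size of the state space.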
The equation system in (12) can be written as (I − P′) · p = v and solved using Gaussian elimination, which is guaranteed to be polynomial in the size of the state space. This approach is memory intensive, however, and also suffers from numerical instability. For these reasons, iterative solution methods, such as Jacobi and Gauss-Seidel [22], are typically preferred. The leading tool for