off, such that the use of a short- or long-term overhead budget is maximized but never
exceeded. Gaps in monitoring, however, introduce uncertainty in the monitoring results.
To quantify the uncertainty, one can estimate the current state of the program. We de-
veloped a framework for this, called Runtime Verification with State Estimation
(RVSE) [10], in which a hidden Markov model (HMM) is used to succinctly model
the program and compute the uncertainty in predictions due to incomplete information.
While monitoring is on, the observed program events drive the transitions of the
property checker, modeled as a deterministic finite automaton (DFA). They also pro-
vide information used to help correct the state estimates (specifically, state probability
distributions) computed from the HMM transition probabilities, by comparing the out-
put probabilities in each state with the observed outputs. When monitoring is off, the
transition probabilities in the HMM alone determine the updated state estimate after the
gap, and the output probabilities in the HMM drive the transitions of the DFA. Each
gap is characterized by a gap length distribution, which is a probability distribution for
the number of missed observations during that gap.
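For concreteness, a single update of the state estimate can be sketched as follows (a minimal NumPy-style illustration with names of our own choosing, not the exact formulation of [10]): when an event is observed, the predicted distribution is corrected by the output probabilities of that event; during a gap step, only the transition probabilities are applied.

import numpy as np

def update_estimate(alpha, A, B, obs=None):
    """One update of the HMM state estimate (a state probability distribution).

    alpha -- current state probability distribution (length: number of HMM states)
    A     -- transition-probability matrix, A[i, j] = P(next state j | current state i)
    B     -- observation-probability matrix, B[i, k] = P(output k | state i)
    obs   -- index of the observed program event, or None while monitoring is off
    """
    predicted = alpha @ A                  # advance using transition probabilities
    if obs is None:
        return predicted                   # gap: no observation to correct with
    corrected = predicted * B[:, obs]      # weight by the observed event's output probabilities
    return corrected / corrected.sum()     # renormalize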
Our algorithm was based on an optimal state estimation algorithm, known as the
forward algorithm, extended to handle gaps. Unfortunately, this algorithm incurs high
overhead, especially for longer sequences of gaps, because it involves repeated matrix
multiplications using the observation-probability and transition-probability matrices. In
our measurements, this was often more than a factor of 10 larger than the overhead of
monitoring the events themselves!
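The source of this cost can be seen in a sketch of how the estimate is advanced across a gap whose length follows a gap length distribution (again with hypothetical names, and tracking only the HMM state, not the DFA state, for brevity): every candidate gap length contributes a further power of the transition matrix.

import numpy as np

def advance_through_gap(alpha, A, gap_length_dist):
    """Advance the state estimate across a monitoring gap of uncertain length.

    gap_length_dist -- dict mapping a possible gap length to its probability
    """
    result = np.zeros_like(alpha)
    for length, prob in gap_length_dist.items():
        # Each candidate length needs another matrix power of A, so long gaps
        # require many matrix multiplications.
        result += prob * (alpha @ np.linalg.matrix_power(A, length))
    return result / result.sum()           # renormalize to guard against round-off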
To reduce the runtime overhead, we developed a version of the algorithm, which we
call approximate precomputed RVSE (AP-RVSE), that precomputes the matrix calcula-
tions and stores the results in a table [1]. Essentially, AP-RVSE precomputes a poten-
tially infinite graph unfolding, where nodes are labeled with state probability distribu-
tions, and edges are labeled with transitions. To ensure the table is finite, we introduced
an approximation in the calculations, controlled by an accuracy parameter ε: if a newly
computed matrix differs from the matrix on an existing node by at most ε according to
the L1-norm, then we re-use the existing node instead of creating a new one. With this
algorithm, the runtime overhead is low, independent of the desired accuracy, but higher
accuracy requires larger tables, and the memory requirements could become problem-
atic. Also, if the set of gap length distributions that may appear in an execution is not
known in advance, precomputation is infeasible.
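The node-reuse test that keeps the table finite can be sketched as follows (hypothetical helper names; the node labels are treated simply as arrays compared under the L1-norm):

import numpy as np

def find_or_add_node(node_labels, new_label, epsilon):
    """Reuse an existing node whose label is within epsilon of the new one
    in the L1-norm; otherwise add a new node.

    node_labels -- list of state-estimate arrays labeling the nodes built so far
    new_label   -- newly computed state-estimate array
    epsilon     -- accuracy parameter bounding the approximation error
    """
    for index, label in enumerate(node_labels):
        if np.abs(label - new_label).sum() <= epsilon:
            return index                   # close enough: reuse, keeping the table finite
    node_labels.append(new_label)
    return len(node_labels) - 1            # no sufficiently close node: create a new one

A larger ε merges more nodes and thus shrinks the table, at the cost of accuracy, which is exactly the trade-off described above.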
This paper introduces an alternative approach, called Runtime Verification with Par-
ticle Filtering (RVPF), to control the balance among runtime overhead, memory usage,
and prediction accuracy. In particle filtering (PF) [7], the probability distribution of
states is approximated by the proportion of particles in each state. The particle filtering
process works in three recurring steps. First, the particles are advanced to their suc-
cessor states by sampling from the HMM's transition probability distribution. Second,
each particle is assigned a weight corresponding to the output probability of the ob-
served program event. Third, the particles are resampled according to the normalized
weights from the second step; this has the effect of redistributing the particles so that
they provide a better prediction of the program events.
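These three steps can be sketched as follows (a minimal NumPy-style illustration, not the implementation used in RVPF; particle states are plain HMM state indices and all names are ours):

import numpy as np

def particle_filter_step(particles, A, B, obs, rng=None):
    """One advance/weight/resample cycle over an array of particles.

    particles -- array of HMM state indices, one per particle
    A         -- transition-probability matrix
    B         -- observation-probability matrix
    obs       -- index of the observed program event
    """
    if rng is None:
        rng = np.random.default_rng()
    # 1. Advance each particle by sampling its successor from its transition row.
    particles = np.array([rng.choice(len(A), p=A[state]) for state in particles])
    # 2. Weight each particle by the output probability of the observed event.
    weights = B[particles, obs]
    weights = weights / weights.sum()
    # 3. Resample particles according to the normalized weights.
    chosen = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[chosen]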
To reduce the variance of PF, we exploit the knowledge of the current program event
and the particular structure of the DBN (dynamic Bayesian network) and employ a variant of PF known as sequential
 