to each variable. Then P(x_1, ..., x_n) = ∏_{i=1}^{n} P(x_i | parents(X_i)), where parents(X_i) denotes the values of Parents(X_i) that appear in x_1, ..., x_n.
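As a concrete illustration of this factorization, consider a hypothetical two-variable network in which Rain is the only parent of WetGrass (all names and probabilities below are invented for this sketch):

```python
# Hypothetical two-node Bayesian network: Rain -> WetGrass.
# The joint factors as P(r, w) = P(r) * P(w | parents = {r}).
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True:  {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(r, w):
    # Chain-rule factorization: product of the P(x_i | parents(X_i)) terms.
    return P_rain[r] * P_wet_given_rain[r][w]

print(joint(True, True))  # = 0.2 * 0.9
```

Summing joint(r, w) over all four value combinations yields 1, as required of a joint distribution.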
A dynamic Bayesian Network (DBN) is a Bayesian network that relates random vari-
ables to each other over adjacent time steps. Moreover, some variables are observable,
and some are not. Let X_t denote the set of state variables at time t. State variables are assumed to be unobservable. Let O_t denote the set of observable variables at time t. The observation at time t is O_t = o_t for some set of values o_t.
A Hidden Markov Model (HMM) is a special kind of DBN; specifically, an HMM is
a DBN with a single state variable and a single observable variable. We refer to a value
of the observable variable of an HMM as an observable action .
To construct a DBN, one must specify the prior distribution P(X_0), capturing the initial state distribution; the transition model P(X_t | X_{t-1}), capturing the dependency of the next state on the current state; and the observation model P(O_t | X_t), encoding
the dependency of the observation on the current state. The transition and observation
models are represented as a Bayesian network.
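For a toy two-state HMM (a single hidden state variable and a single observable variable; all names and numbers here are invented for illustration), the three distributions can be written down directly:

```python
# Toy HMM specification; probabilities are illustrative assumptions.
# Prior distribution P(X_0):
prior = {"sunny": 0.6, "rainy": 0.4}

# Transition model P(X_t | X_{t-1}): rows indexed by the previous state.
transition = {"sunny": {"sunny": 0.8, "rainy": 0.2},
              "rainy": {"sunny": 0.3, "rainy": 0.7}}

# Observation model P(O_t | X_t): rows indexed by the current state.
observation = {"sunny": {"dry": 0.9, "wet": 0.1},
               "rainy": {"dry": 0.2, "wet": 0.8}}

# Sanity check: every row is a probability distribution.
for row in (prior, *transition.values(), *observation.values()):
    assert abs(sum(row.values()) - 1.0) < 1e-9
```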
Particle filtering (PF) is a sequential Monte Carlo method that can be used to perform state estimation in a Bayesian network [7]. PF can be used to estimate the state probability distribution P(X_t), given an observation sequence o_{1:t}. In one of the most commonly used forms of particle filtering, known as sequential importance resampling (SIR), a population of N_p particles is first created and assigned initial states by sampling from the prior distribution P(X_0). A three-step update cycle is then repeated
for each time step: (i) each particle is propagated forward by sampling the new state value x_t given the previous state x_{t-1} of the particle, based on the transition model P(X_t | x_{t-1}); (ii) each particle is weighted by the probability it assigns to the new evidence, P(o_t | x_t); (iii) the population is resampled, i.e., a new population of N_p
(unweighted) particles is created, where each new particle is selected from the current
population, and the probability that a particular particle is selected is proportional to
its weight. Resampling focuses the particles on the high-probability regions of the state
space, by probabilistically discarding particles with low weight and duplicating parti-
cles with high weight.
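The three-step SIR cycle can be sketched as follows for a discrete-state HMM; the dictionary-based models and all numbers are invented for this sketch, not taken from the text:

```python
import random

def sir_step(particles, o_t, transition, observation):
    """One SIR update cycle: (i) propagate, (ii) weight, (iii) resample."""
    # (i) Propagate: sample each particle's new state from P(X_t | x_{t-1}).
    propagated = [random.choices(list(transition[x]),
                                 weights=list(transition[x].values()))[0]
                  for x in particles]
    # (ii) Weight: each particle is weighted by P(o_t | x_t).
    weights = [observation[x][o_t] for x in propagated]
    # (iii) Resample: a new unweighted population of N_p particles, each
    # selected with probability proportional to its weight.
    return random.choices(propagated, weights=weights, k=len(particles))

# Toy models (illustrative assumptions):
transition = {"sunny": {"sunny": 0.8, "rainy": 0.2},
              "rainy": {"sunny": 0.3, "rainy": 0.7}}
observation = {"sunny": {"dry": 0.9, "wet": 0.1},
               "rainy": {"dry": 0.2, "wet": 0.8}}

random.seed(0)
# Create N_p particles by sampling from the prior P(X_0).
particles = random.choices(["sunny", "rainy"], weights=[0.6, 0.4], k=1000)
for o_t in ["wet", "wet", "dry"]:
    particles = sir_step(particles, o_t, transition, observation)
# Estimate P(X_t = rainy | o_{1:t}) as the fraction of particles in "rainy".
print(particles.count("rainy") / len(particles))
```

Since the resampled population is unweighted, the posterior estimate for a state reduces to the fraction of particles in that state.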
One can reduce the variance of PF by using the evidence o_t in the first step of the update cycle, by sampling the next state x_t from the conditional probability distribution P(X_t | x_{t-1}, o_t). As we show in Section 5, this probability distribution can be computed as P(x_t | x_{t-1}, o_t) = P(x_t | x_{t-1}) · P(o_t | x_t) / P(o_t | x_{t-1}). Precomputing P(o_t | x_{t-1}) is possible if the HMM transition probabilities and observation probabilities are given explicitly. By reducing the variance, the resampling frequency (which is a considerable performance bottleneck) can also be reduced.
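One way to realize this variance-reduced propagation for a discrete-state HMM is to multiply transition and observation probabilities pointwise and normalize; the normalizing constant is then exactly P(o_t | x_{t-1}). A minimal sketch (dictionary models and all numbers invented here):

```python
import random

def conditional_propagate(x_prev, o_t, transition, observation):
    """Sample x_t from P(X_t | x_{t-1}, o_t) =
    P(x_t | x_{t-1}) * P(o_t | x_t) / P(o_t | x_{t-1})."""
    states = list(transition[x_prev])
    # Unnormalized proposal: P(x_t | x_{t-1}) * P(o_t | x_t).
    unnorm = [transition[x_prev][x] * observation[x][o_t] for x in states]
    # The normalizer is P(o_t | x_{t-1}); with explicitly given HMM
    # transition and observation probabilities it can be precomputed.
    p_obs = sum(unnorm)
    # random.choices normalizes the weights internally.
    x_t = random.choices(states, weights=unnorm)[0]
    return x_t, p_obs

# Toy models (illustrative assumptions):
transition = {"sunny": {"sunny": 0.8, "rainy": 0.2},
              "rainy": {"sunny": 0.3, "rainy": 0.7}}
observation = {"sunny": {"dry": 0.9, "wet": 0.1},
               "rainy": {"dry": 0.2, "wet": 0.8}}

random.seed(1)
x_t, p_obs = conditional_propagate("sunny", "wet", transition, observation)
# p_obs = P(wet | sunny) = 0.8 * 0.1 + 0.2 * 0.8 = 0.24
```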
PF approximates P(x_t | o_{1:t}), the probability of state x_t after observation sequence o_{1:t}, by (1/N_p) ∑_{i=1}^{N(x_t | o_{1:t})} w_i, where N(x_t | o_{1:t}) is the number of particles in state x_t after processing observations o_1, ..., o_t, and the w_i are the weights of the individual particles which are in state x_t.
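This estimate is a direct sum over the particles currently in state x_t; a minimal transcription (the particle and weight lists below are invented for illustration):

```python
def estimate(particles, weights, x_t):
    """Approximate P(x_t | o_{1:t}) as (1/N_p) times the sum of the
    weights w_i of the particles currently in state x_t."""
    in_state = sum(w for x, w in zip(particles, weights) if x == x_t)
    return in_state / len(particles)

# Hypothetical population of N_p = 4 particles after processing o_{1:t}.
particles = ["rainy", "sunny", "rainy", "rainy"]
weights = [1.0, 1.0, 1.0, 1.0]  # unweighted, e.g. just after resampling
print(estimate(particles, weights, "rainy"))  # 3/4 -> 0.75
```

With an unweighted (post-resampling) population, the estimate reduces to the fraction of particles in the queried state.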
Runtime Verification with State Estimation (RVSE) [10] is an algorithm for runtime
verification in the presence of observation gaps. In RVSE, a Hidden Markov Model
of the monitored program is constructed, monitored event sequences are treated as ob-
servation sequences of the HMM, and an extension of the optimal forward algorithm