cpdist takes as arguments a bn.fit object describing the Bayesian network, the labels of one or more query nodes, and a logical expression describing the evidence. The latter works in the same way as the analogous argument of the subset function in package base. cpdist returns a data frame containing the particles generated by logic sampling that match the evidence.
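For instance, the particles behind the query shown below can be retrieved directly; this is a minimal sketch reusing the fitted object and the variable names from the running example, with head used only to inspect the first few rows:
> # particles is a data frame of sampled values for the query nodes.
> particles <- cpdist(fitted, nodes = c("pakts473", "PKA"),
+              evidence = (p44.42 == "LOW") | (praf == "LOW"))
> head(particles)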
On the other hand, cpquery returns the probability of a specific event, described by another logical expression. So, for example,
> cpquery(fitted,
+ event = (pakts473 == "LOW") & (PKA != "HIGH"),
+ evidence = (p44.42 == "LOW") | (praf == "LOW"))
[1] 0.5594823
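Note that cpquery estimates this probability by Monte Carlo simulation (logic sampling by default), so repeated calls will return slightly different values.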
The combination of events and evidence on different variables through the use of vectorized operators such as !=, ==, &, |, and %in% provides a versatile interface for specifying conditional probability queries. This is particularly important when performing inference on Gaussian Bayesian networks, because in this setting both event and evidence are regions in a real space. Therefore, complex combinations of <, <=, >=, and > are required to describe them.
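As a minimal sketch of such a query, assuming a hypothetical bn.fit object gbn.fitted for a Gaussian network with continuous nodes A, B, and C:
> # gbn.fitted and the nodes A, B, C are hypothetical placeholders
> # for a real Gaussian Bayesian network fitted with bn.fit.
> cpquery(gbn.fitted,
+         event = (A > 0) & (A <= 2),
+         evidence = (B >= -1) & (C < 0.5))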
4.3 Inference in Dynamic Bayesian Networks
Techniques for learning dynamic Bayesian networks are based on the same fundamental ideas as the ones for learning static networks, as we have seen in Chap. 3 for the dynamic Bayesian networks based on VAR models. The same is true for inference. The most common type of query for such models is to compute the marginal distribution of a node X_i at a time t conditional on other nodes at times 1, ..., T:

P( X_i^(t) | X^(1), ..., X^(T) )

- If T = t, the query is called filtering and consists in querying the state of the network at the current time given all the available information.
- If T > t, the query is called smoothing and consists in reducing or removing noise from past time points using the information we have collected in the meantime.
- If T < t, the query is a prediction.
Several exact and approximate inference algorithms specific to dynamic Bayesian networks have been presented in the literature. Popular ones include the forward-backward algorithm, the frontier algorithm, the interface algorithm, the Boyen-Koller (BK) algorithm, the Factored Frontier (FF) algorithm, and the Loopy Belief Propagation (LBP) algorithm. For an overview of such approaches, we refer the reader to Murphy's PhD thesis (Murphy 2002). However, the techniques we have been using in the previous section can also be applied to dynamic Bayesian networks.
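For example, once the time points are unrolled into a static network whose nodes correspond to the variables at each time (as with the VAR models of Chap. 3), a prediction query reduces to an ordinary conditional probability query. A minimal sketch, assuming a hypothetical bn.fit object dbn.fitted with nodes X1.t1 and X2.t1 (time 1) and X1.t2 (time 2):
> # dbn.fitted and the node names are hypothetical placeholders.
> cpquery(dbn.fitted,
+         event = (X1.t2 > 0),
+         evidence = (X1.t1 > 0) & (X2.t1 < 1))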
Most probable explanation queries can be performed for all of filtering, smoothing, and prediction, as shown in Fig. 4.2 for the LASSO model fitted from the arth12 data set with lars in Sect. 3.5.2.