As we can see from the output of cnMatEdges, this learning strategy returns a
network similar to bn.dgs and bn.dhc. However, if we run this example multiple
times, occasionally we will obtain a network in which the arc between MECH and VECT
is missing. This is a result of the natural sensitivity of simulated annealing to the
values of its parameters, which are known to be difficult to set correctly (Bouckaert,
1995). If we use the cnSearchOrder function instead of cnSearchSA, thus
limiting the search for the optimal network to those with the same node ordering
as bn.dhc, this instability disappears completely.
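To see why simulated annealing is so sensitive to its parameters, consider a minimal, generic sketch of the algorithm (plain Python, unrelated to the catnet implementation; the function and parameter names are invented for illustration). The cooling rate controls how long the search keeps accepting uphill moves: cool too fast and the search freezes early.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.999,
                        iters=5000, seed=0):
    """Minimise f by simulated annealing; the cooling rate is a key parameter."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # propose a local move
        fc = f(cand)
        # Accept downhill moves always; uphill moves with probability exp(-delta/t).
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
        t *= cooling                          # geometric cooling schedule
    return best_x, best_f

# With a slow schedule the search settles near the minimiser of this simple
# function; a much faster schedule (e.g. cooling=0.5) stops exploring almost
# immediately and becomes far less reliable.
best_x, best_f = simulated_annealing(lambda x: (x - 2) ** 2, x0=0.0)
```

The same trade-off applies when the states being explored are network structures rather than real numbers, which is why a restricted search such as cnSearchOrder behaves more stably.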
2.4 Pearl's Causality
In Sect. 2.1, Bayesian networks have been defined in terms of conditional in-
dependence statements and probabilistic properties, without any implication that
arcs should represent cause-and-effect relationships. The existence of equivalence
classes of networks indistinguishable from a probabilistic point of view provides a
simple proof that arc directions are not indicative of causal effects.
However, from an intuitive point of view, it can be argued that a "good" Bayesian
network should represent the causal structure of the data it is describing. Such net-
works are usually fairly sparse, and their interpretation is at the same time clear and
meaningful, as explained by Pearl (2009) in his book on causality:
It seems that if conditional independence judgments are byproducts of stored causal rela-
tionships, then tapping and representing those relationships directly would be a more natu-
ral and more reliable way of expressing what we know or believe about the world. This is
indeed the philosophy behind causal Bayesian networks.
Learning causal models, especially from observational data, presents significant
challenges. In particular, three additional assumptions are needed compared to
non-causal Bayesian network learning:
• Each variable Xi ∈ X is conditionally independent of its non-effects, both direct
and indirect, given its direct causes. This assumption is called the causal Markov
assumption and represents the causal interpretation of the Markov property
introduced in Sect. 2.1.
• There must exist a network structure which is faithful to the dependence structure
of X.
• There must be no latent variables (unobserved variables influencing the variables
in the network) acting as confounding factors. Such variables may induce spu-
rious correlations between the observed variables, thus introducing bias in the
causal network. Even though this is often listed as a separate assumption, it is re-
ally a corollary of the first two: the presence of unobserved variables violates the
faithfulness assumption (because the network structure does not include them)
and possibly the causal Markov property (if an arc is wrongly added between the
observed variables due to the influence of the latent one).
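The effect of a latent confounder is easy to reproduce in a small simulation. In this sketch (plain Python; the variable names are invented for illustration), a hidden variable L drives two observed variables X and Y that have no causal link to each other, yet they end up strongly correlated:

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)
n = 5000
latent = [rng.gauss(0, 1) for _ in range(n)]     # unobserved confounder L
x = [lv + rng.gauss(0, 0.5) for lv in latent]    # X <- L + noise
y = [lv + rng.gauss(0, 0.5) for lv in latent]    # Y <- L + noise

# X and Y share no direct causal link, yet the hidden common cause induces
# a strong correlation between them (theoretically 0.8 for these variances).
r = pearson(x, y)
```

A structure learning algorithm given only X and Y would be tempted to join them with an arc, which is exactly the kind of bias the assumption above rules out.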