> bn.gs2 = gs(alarm, test = "smc-x2")
> bn.iamb2 = iamb(alarm, test = "smc-x2")
> bn.inter2 = inter.iamb(alarm, test = "smc-x2")
> unlist(compare(true, bn.gs2))
> unlist(compare(true, bn.iamb2))
> unlist(compare(true, bn.inter2))
(d) Shrinkage tests improve the results of structure learning much like permutation
tests, but with much shorter execution times.
> bn.gs3 = gs(alarm, test = "smc-x2", B = 10000,
+   alpha = 0.01)
> bn.iamb3 = iamb(alarm, test = "smc-x2",
+   B = 10000, alpha = 0.01)
> bn.inter3 = inter.iamb(alarm, test = "smc-x2",
+   B = 10000, alpha = 0.01)
> unlist(compare(true, bn.gs3))
> unlist(compare(true, bn.iamb3))
> unlist(compare(true, bn.inter3))
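The execution-time claim can be checked directly with timings; a minimal sketch, assuming bnlearn and the alarm data are already loaded, and using "mi-sh" (bnlearn's shrinkage mutual information test) as an example of a shrinkage test:

```r
> # permutation test with many permutations vs. a shrinkage test
> system.time(gs(alarm, test = "smc-x2", B = 10000))
> system.time(gs(alarm, test = "mi-sh"))
> # the learned structure remains comparable to the true network
> unlist(compare(true, gs(alarm, test = "mi-sh")))
```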
2.5 Consider again the alarm network used in Exercise 2.4.
(a) Learn its structure with hill-climbing and tabu search, using the posterior
density BDe as a score function. How does the network structure change
with the imaginary sample size iss?
(b) Does the length of the tabu list have a significant impact on the network
structures learned with tabu?
(c) How does the BIC score compare with BDe at different sample sizes in
terms of structure and score of the learned network?
(a) In both hill-climbing and tabu search the number of arcs increases with the
value of iss. Since the imaginary sample size determines how much weight
is assigned to the prior distribution compared to the sample, it also controls the
amount of smoothing applied to the posterior density. For this reason, (compar-
atively) large values of iss oversmooth the data, so that widely different
networks have similar scores; this in turn allows too many arcs to be included
in the network.
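That widely different structures can score almost equally well under a large iss can also be checked numerically; a minimal sketch, assuming bnlearn and the alarm data are loaded:

```r
> bn1 = hc(alarm, score = "bde", iss = 1)
> bn50 = hc(alarm, score = "bde", iss = 50)
> # the larger imaginary sample size yields many more arcs...
> narcs(bn1)
> narcs(bn50)
> # ...yet the BDe scores computed at iss = 50 are close
> score(bn1, alarm, type = "bde", iss = 50)
> score(bn50, alarm, type = "bde", iss = 50)
```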
> par(mfrow = c(2, 5))
> for (iss in c(1, 5, 10, 20, 50)) {
+   bn = hc(alarm, score = "bde", iss = iss)
+   main = paste("hc(..., iss = ", iss, ")",
+     sep = "")
+   sub = paste(narcs(bn), "arcs")
+   graphviz.plot(bn, main = main, sub = sub)
+ }
> for (iss in c(1, 5, 10, 20, 50)) {
+   bn = tabu(alarm, score = "bde", iss = iss)