by Koller and Friedman (2009), low values result in a good balance between the
smoothing effect of the uniform prior distribution and the accuracy of the model, so
we set N = 5.
Once the prior distribution has been set up, we can fit the posterior distribution
for net and use it as the starting point of a hill-climbing search for the network
structure with the highest posterior density.
> net = learn(net, marks, prior)$nw
> best = autosearch(net, marks, prior)
> mstring = deal::modelstring(best$nw)
> mstring
[1] "[MECH|ALG][VECT|MECH:ALG][ALG|ANL][ANL][STAT|ALG:ANL]"
As we can see from the model string, the network returned by the heuristic search
is very similar but not identical to the one returned by the implementation of
hill-climbing implemented in bnlearn. This could be the result of using different
parameters for the search (e.g., the BIC score instead of the posterior density, different
default values for the imaginary sample size, etc.). However, we can show that the
two networks have the same score and are therefore both optimal.
> bn.deal = model2network(mstring)
> bnlearn::score(bn.deal, marks, type = "bge",
+     iss = 5)
[1] -1725.729
> bn.hc = hc(marks)
> bnlearn::score(bn.hc, marks, type = "bge", iss = 5)
[1] -1725.729
Note that the double-colon syntax (deal:: and bnlearn::) is required to execute
the correct function, because both bnlearn and deal provide a function called
modelstring.
The stability of this network structure can be confirmed with a second search
using random restarts, performed in deal by the heuristic function; random
restarts are also implemented in bnlearn for hc (and controlled with the
restart and perturb arguments).
> heuristic = heuristic(best$nw, marks, prior,
+     restart = 2, trylist = best$trylist)
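For comparison, a restarted search can also be run directly in bnlearn by passing the restart and perturb arguments to hc; the call below is only a sketch, and the numbers of restarts and perturbed arcs are illustrative values rather than ones taken from the text.
> bn.restart = hc(marks, score = "bge", iss = 5,
+     restart = 2, perturb = 1)
> bnlearn::score(bn.restart, marks, type = "bge",
+     iss = 5)
If the search space is explored thoroughly enough, the restarted search should return a network with the same score as the one found above.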
2.3.5 Parameter Learning
Once we have learned the network structure, we can estimate the parameters of the
local distributions. In every package other than bnlearn, this step is performed
by the same functions that learn the structure of the network, and only a single
estimator is available: either a maximum likelihood estimator or a Bayesian one.
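In bnlearn, for instance, parameter learning is a separate step performed by the bn.fit function; the snippet below is a sketch assuming the bn.hc network learned above. Since marks contains continuous variables, the default maximum likelihood estimator yields Gaussian local distributions.
> fitted = bn.fit(bn.hc, marks)
> fitted$ANL
The second command prints the estimated local distribution of the ANL node, i.e., its regression coefficients against its parents and the residual standard deviation.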