significant dependence relationships. Selecting which arcs to include in the network
can be done via cross-validation or by minimizing the fraction of the final L1 norm
or the mean square error.
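As an illustrative sketch of such cross-validated arc selection, the fragment below uses the glmnet package; the data are simulated, and the names X.past and x.target are hypothetical placeholders for the lagged predictors and the target node.

library(glmnet)

set.seed(42)
X.past <- matrix(rnorm(100 * 10), ncol = 10,
                 dimnames = list(NULL, paste0("X", 1:10)))    # lagged predictors
x.target <- drop(X.past %*% c(2, -1, rep(0, 8))) + rnorm(100) # target node

## cross-validation chooses the penalty; the arcs included in the network
## correspond to the predictors with non-zero coefficients at that penalty.
cv.fit <- cv.glmnet(X.past, x.target)
beta <- as.matrix(coef(cv.fit, s = "lambda.min"))[-1, ]  # drop the intercept
names(beta)[beta != 0]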
3.3.2 James-Stein Shrinkage
An efficient estimator of the covariance matrix can be obtained by “shrinking” the
empirical correlation coefficients towards zero and the empirical variances towards
their median. The shrinkage coefficient can be computed in closed form using the
expression provided in Ledoit and Wolf (2003), making this approach extremely fast.
The resulting correlation matrix has been shown to dominate the empirical one,
following classic results on shrinkage from Stein (1956) and James and Stein (1961):
its mean square error is never worse than that of the empirical correlation matrix.
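A minimal sketch of this estimator, assuming the corpcor package (by Schäfer, Opgen-Rhein and Strimmer, which implements shrinkage estimators of this type) and simulated data:

library(corpcor)

set.seed(42)
X <- matrix(rnorm(20 * 50), nrow = 20)  # n = 20 observations, p = 50 variables

## correlations are shrunk towards zero and variances towards their median;
## the optimal shrinkage intensity lambda is estimated analytically.
R.shrunk <- cor.shrink(X)
V.shrunk <- var.shrink(X)
attr(R.shrunk, "lambda")  # the estimated shrinkage intensity

Note that with p = 50 variables and only n = 20 observations the empirical correlation matrix is singular, while the shrunken one is guaranteed to be positive definite.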
An application of the James-Stein shrinkage approach to VAR processes has been
proposed by Opgen-Rhein and Strimmer (2007) and shown to outperform many
classic approaches. This can be attributed to improved estimates of the regression
coefficients, which are essentially a function of the covariance matrix of X. The
network structure is then determined by including the arcs in order of decreasing
magnitude of their coefficients. Multiple testing correction to control the false
discovery rate (FDR) can also be applied with the local FDR approach introduced
by Schäfer and Strimmer (2005).
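A sketch of this workflow using the GeneNet package by the same authors; the time series is simulated, and passing a plain matrix of equally spaced time points to the "dynamic" method is an assumption worth checking against the package documentation (a longitudinal object may be required in general).

library(GeneNet)

set.seed(42)
ts.data <- matrix(rnorm(30 * 10), nrow = 30,
                  dimnames = list(NULL, paste0("G", 1:10)))  # 30 time points

## shrinkage estimate of the dynamic partial correlations, ...
pc <- ggm.estimate.pcor(ts.data, method = "dynamic")

## ... then rank the arcs and attach p-values, q-values and local fdr values.
edges <- network.test.edges(pc)
head(edges)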
In the context of static Bayesian networks, James-Stein shrinkage is used to com-
pute regularized partial correlations and conditional probabilities for use in condi-
tional independence tests. Several such tests are implemented in bnlearn, both for
constraint-based structure learning algorithms and for independent use via the
ci.test function.
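For instance, the "mi-g-sh" test in bnlearn is the shrinkage estimator of the Gaussian mutual information (see ?ci.test for the full list of available tests); the data below are simulated for illustration.

library(bnlearn)

set.seed(42)
dat <- data.frame(A = rnorm(50))
dat$B <- dat$A + rnorm(50)
dat$C <- dat$B + rnorm(50)

## shrinkage test of the Gaussian mutual information:
## is A independent of C given B?
ci.test("A", "C", "B", data = dat, test = "mi-g-sh")

## the same test can drive a constraint-based algorithm such as Grow-Shrink.
gs(dat, test = "mi-g-sh")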
3.3.3 First-Order Conditional Dependencies Approximation
Another powerful approach to learning dynamic Bayesian networks, called G1DBN
and proposed by Lèbre (2009), is based on first-order conditional dependencies.
The cornerstone of this approach is the concept of the low-order conditional de-
pendence graph, which originated in the theory of graphical modeling with directed
acyclic graphs. The directed acyclic graph defining the dynamic Bayesian network
is approximated by its first-order conditional dependencies: under acceptable con-
ditions, the first-order conditional dependence graph contains the directed acyclic
graph defining the dynamic Bayesian network to be inferred.
By using this approximation, G1DBN implements dynamic Bayesian network
learning as a two-step procedure. First, it learns a directed acyclic graph encoding
the first-order partial dependence relationships. Subsequently, it infers the real net-
work structure of the dynamic Bayesian network using the graph from the previous
step. The latter is a subgraph of the former for linear models.
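A sketch of the two-step procedure with the G1DBN package; the function and argument names used below (DBNScoreStep1, DBNScoreStep2, BuildEdges, alpha1, dec) reflect my reading of the package documentation, and the thresholds are illustrative assumptions. The data are simulated.

library(G1DBN)

set.seed(42)
ts.data <- matrix(rnorm(30 * 10), nrow = 30)  # 30 time points, 10 variables

## Step 1: score the first-order conditional dependence graph.
step1 <- DBNScoreStep1(ts.data, method = "ls")
G1 <- BuildEdges(step1$S1ls, threshold = 0.50, dec = FALSE)

## Step 2: infer the dynamic Bayesian network within the Step 1 graph.
step2 <- DBNScoreStep2(step1$S1ls, data = ts.data, method = "ls",
                       alpha1 = 0.50)
dbn <- BuildEdges(step2, threshold = 0.05, dec = FALSE)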