Algorithm 4.2 Logic Sampling Algorithm

1. Order the variables in X according to the topological ordering implied by G, say X(1) ≺ X(2) ≺ ... ≺ X(p).
2. For a suitably large number of samples x* = (x*1, ..., x*p):
   a. for i = 1, ..., p, generate x*(i) from X(i) | Π_X(i);
   b. if x* includes E, set n_E = n_E + 1;
   c. if x* includes both Q = q and E, set n_{E,q} = n_{E,q} + 1.
3. Estimate P(Q | E, G, Θ) with n_{E,q} / n_E.
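As an illustration only, the following short Python sketch implements the steps of Algorithm 4.2 on a small hypothetical three-variable discrete network; the network, its probabilities and the variable names Cloudy, Sprinkler and Rain are illustrative choices, not part of the original presentation.

import random

# Hypothetical 3-node discrete network: Cloudy -> Sprinkler, Cloudy -> Rain.
# Each CPT maps a tuple of parent values to P(variable = True).
cpts = {
    "Cloudy":    {(): 0.5},
    "Sprinkler": {(True,): 0.1, (False,): 0.5},
    "Rain":      {(True,): 0.8, (False,): 0.2},
}
parents = {"Cloudy": (), "Sprinkler": ("Cloudy",), "Rain": ("Cloudy",)}
order = ["Cloudy", "Sprinkler", "Rain"]   # topological ordering implied by G

def logic_sampling(evidence, query, n_samples=100_000):
    """Estimate P(query | evidence) by forward/logic sampling (Algorithm 4.2)."""
    n_e, n_eq = 0, 0
    for _ in range(n_samples):
        x = {}
        # Step 2a: sample each X(i) from X(i) | Pi_X(i) in topological order.
        for var in order:
            pa = tuple(x[p] for p in parents[var])
            x[var] = random.random() < cpts[var][pa]
        # Step 2b: count samples consistent with the evidence E.
        if all(x[v] == val for v, val in evidence.items()):
            n_e += 1
            # Step 2c: count samples also consistent with Q = q.
            if all(x[v] == val for v, val in query.items()):
                n_eq += 1
    # Step 3: the estimate is n_{E,q} / n_E (undefined if no sample matched E).
    return n_eq / n_e if n_e > 0 else float("nan")

print(logic_sampling(evidence={"Rain": True}, query={"Sprinkler": True}))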
The details of this algorithm are beyond the scope of this topic. A detailed explanation along with step-by-step examples can be found in Korb and Nicholson (2010) and Koller and Friedman (2009).
Approximate inference algorithms use Monte Carlo simulations to sample from the local distributions and thus estimate P(Q | E, G, Θ) or f(Q | E, G, Θ). In particular, they generate a large number of samples from B and estimate the relevant conditional probabilities by weighting the samples that include both E and Q = q against those that include only E. In computer science, these random samples are often called particles, and the algorithms that make use of them are known as particle filters or particle-based methods.
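In more explicit terms, and purely as a sketch of the weighting idea described above, the estimated conditional probability can be written as a ratio of weighted counts over m particles x_1, ..., x_m with weights w_1, ..., w_m:

\hat{P}(Q = q \mid E, G, \Theta) =
  \frac{\sum_{i=1}^{m} w_i \, \mathbf{1}\{x_i \text{ includes both } Q = q \text{ and } E\}}
       {\sum_{i=1}^{m} w_i \, \mathbf{1}\{x_i \text{ includes } E\}} .

When all weights are equal to 1, this reduces to the count ratio n_{E,q} / n_E used in Algorithm 4.2.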
Many approaches have been developed for random sampling, for weighting, and for their combination, resulting in several approximate algorithms. Random sampling ranges from the generation of independent samples to more complex Markov chain Monte Carlo (MCMC) schemes; for a gentle introduction to the subject, we refer the reader to Robert and Casella (2009). Common choices are either rejection sampling or importance sampling. Furthermore, weight functions range from the uniform distribution to likelihood functions to various estimates of posterior probability. The simplest combination of these sampling and weighting approaches is known as either forward or logic sampling. It is described in Algorithm 4.2 and illustrated in detail in both Korb and Nicholson (2010) and Koller and Friedman (2009). Logic sampling combines rejection sampling and uniform weights, essentially counting the proportion of generated samples including E that also include Q = q. Clearly, such an algorithm can be very inefficient if P(E) is small, because most particles will be discarded without contributing to the estimation of P(Q = q | E, G, Θ). However, its simplicity makes it easy to implement and very general in its application; it allows for very complex specifications of E and Q for both MAP(Q | E, B) and CPQ(Q | E, B). At the other end of the spectrum, complex approximate algorithms such as the adaptive importance sampling scheme by Cheng and Druzdzel (2000) can estimate conditional probabilities as small as 10^−41. They also perform better on large networks. However, their assumptions often restrict them to discrete data and may require the specification of nontrivial tuning parameters.
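As a concrete illustration of importance sampling with likelihood weights (the general idea behind such schemes, not the adaptive algorithm of Cheng and Druzdzel), evidence variables can be fixed to their observed values and each particle weighted by the likelihood of the evidence, so that no particle is discarded. The Python sketch below reuses the same hypothetical network as the logic sampling example above; all names and probabilities are again illustrative assumptions.

import random

# Same hypothetical network as in the logic sampling sketch above.
cpts = {
    "Cloudy":    {(): 0.5},
    "Sprinkler": {(True,): 0.1, (False,): 0.5},
    "Rain":      {(True,): 0.8, (False,): 0.2},
}
parents = {"Cloudy": (), "Sprinkler": ("Cloudy",), "Rain": ("Cloudy",)}
order = ["Cloudy", "Sprinkler", "Rain"]

def likelihood_weighting(evidence, query, n_samples=100_000):
    """Estimate P(query | evidence) by importance sampling with likelihood weights."""
    num, den = 0.0, 0.0
    for _ in range(n_samples):
        x, w = {}, 1.0
        for var in order:
            pa = tuple(x[p] for p in parents[var])
            p_true = cpts[var][pa]
            if var in evidence:
                # Evidence variables are clamped; the particle is weighted by
                # the probability of the observed value instead of being rejected.
                x[var] = evidence[var]
                w *= p_true if evidence[var] else (1.0 - p_true)
            else:
                x[var] = random.random() < p_true
        den += w
        if all(x[v] == val for v, val in query.items()):
            num += w
    return num / den if den > 0 else float("nan")

print(likelihood_weighting(evidence={"Rain": True}, query={"Sprinkler": True}))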