The original message-passing algorithm was proposed by Pearl [ 362 ]. Detailed
descriptions of loopy belief propagation in computer vision were given by Freeman
et al. [ 154 ] and Yedidia et al. [ 566 ]. Szeliski et al. [ 485 ] compared the performance
of loopy belief propagation with graph cuts using α-expansion (see the next section)
and several of their variants on a benchmark dataset of computer vision problems.
They also provided a freely available implementation using a common problem
specification (see http://vision.middlebury.edu/MRF).
Felzenszwalb and Huttenlocher [ 138 ] described several simple modifications to
the basic loopy belief propagation algorithm that dramatically speed up its
convergence for the graphs and cost functions typically encountered in computer vision
applications like stereo.
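To make the message-passing step concrete, the following is a minimal sketch of the basic min-sum message update that such speedups build on; the NumPy representation, the function name, and the normalization choice are illustrative assumptions, not taken from any of the cited implementations.

```python
import numpy as np

def min_sum_message(data_cost_i, smoothness, incoming):
    """Message from node i to a neighbor j in min-sum loopy belief propagation.

    data_cost_i : (K,) array, E_data(L(i) = l) for each of K labels
    smoothness  : (K, K) array, E_smoothness(l_i, l_j)
    incoming    : list of (K,) messages previously sent to i by its other neighbors
    """
    h = data_cost_i + np.sum(incoming, axis=0)       # aggregate local evidence at node i
    msg = np.min(h[:, None] + smoothness, axis=0)    # minimize over l_i for each candidate l_j
    return msg - msg.min()                           # normalize to keep values from drifting
```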
A.3 GRAPH CUTS AND α-EXPANSION
The main alternative to loopy belief propagation for minimizing energies of the form
of Equation ( A.5 ) is the use of graph cuts . We introduced graph cuts in Section 2.8
in the context of hard segmentation of an image; in this case, there are only two
labels (i.e., a pixel is either part of the foreground or part of the background). We also
discussed graph cuts with binary labels in Section 3.3 for finding good compositing
seams and in Section 3.5 for seam carving.
The main advantage of graph cuts in these two-label situations is that efficient
algorithms exist to globally minimize the Gibbs energy in the special case when the
smoothness term satisfies a Potts model,² namely
E_{\mathrm{smoothness}}^{i,j}(L(i), L(j)) =
\begin{cases}
  0      & \text{if } L(i) = L(j) \\
  V_{ij} & \text{if } L(i) \neq L(j)
\end{cases}
\tag{A.11}
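As a concrete illustration of Equation (A.11), the sketch below evaluates the Potts smoothness term and the resulting Gibbs energy; the dictionary-based data layout and function names are illustrative assumptions, not a prescribed interface.

```python
def potts_smoothness(l_i, l_j, v_ij):
    """Potts pairwise term of Equation (A.11): zero if the labels agree, V_ij otherwise."""
    return 0.0 if l_i == l_j else v_ij

def gibbs_energy(labels, data_cost, V, edges):
    """Gibbs energy with a Potts smoothness model.

    labels    : dict, node i -> label L(i)
    data_cost : dict, (i, label) -> E_data(L(i) = label)
    V         : dict, (i, j) -> V_ij
    edges     : iterable of node pairs (i, j)
    """
    energy = sum(data_cost[(i, labels[i])] for i in labels)
    energy += sum(potts_smoothness(labels[i], labels[j], V[(i, j)]) for i, j in edges)
    return energy
```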
Section 2.8.1 describes how to map a Gibbs energy in this form onto a graph with
weighted edges. To review, we begin with the set of nodes V used to define the Gibbs
energy function, and add two special terminal nodes that we call the source S and
the sink T. We assume the source is associated with label 0 and the sink is associated
with label 1.³ We also augment the set of edges E used to define the Gibbs energy
function, adding edges e_iS and e_iT between each regular node and each of the two
terminals. We put a nonnegative weight w_ij on each edge e_ij. The weights on each
edge are related to the data and smoothness terms of Equation (A.5) by:
\begin{aligned}
  w_{iS} &= E_{\mathrm{data}}(L(i) = 1) \\
  w_{iT} &= E_{\mathrm{data}}(L(i) = 0) \\
  w_{ij} &= V_{ij}
\end{aligned}
\tag{A.12}
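To see how Equation (A.12) translates into an actual minimum-cut computation, here is a minimal sketch that builds the terminal and pairwise edges and solves the two-label problem with networkx's generic max-flow/min-cut routine. Practical systems use specialized solvers (e.g., the Boykov-Kolmogorov algorithm), and the dictionary inputs and names here are illustrative assumptions.

```python
import networkx as nx

def binary_graph_cut(nodes, edges, e_data, V):
    """Solve the two-label Potts problem of Equations (A.11)-(A.12) by minimum cut.

    nodes  : iterable of node identifiers
    edges  : iterable of pairs (i, j) with Potts weight V[(i, j)]
    e_data : dict, (i, label) -> E_data(L(i) = label), label in {0, 1}
    """
    S, T = "source", "sink"
    G = nx.DiGraph()
    for i in nodes:
        G.add_edge(S, i, capacity=e_data[(i, 1)])   # w_iS = E_data(L(i) = 1)
        G.add_edge(i, T, capacity=e_data[(i, 0)])   # w_iT = E_data(L(i) = 0)
    for i, j in edges:
        G.add_edge(i, j, capacity=V[(i, j)])        # w_ij = V_ij, added in both
        G.add_edge(j, i, capacity=V[(i, j)])        # directions since V_ij is symmetric
    cut_value, (source_side, _) = nx.minimum_cut(G, S, T)
    # The source carries label 0, so nodes left on the source side take label 0;
    # cut_value then equals the Gibbs energy of the returned labeling.
    labels = {i: 0 if i in source_side else 1 for i in nodes}
    return labels, cut_value
```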
² More generally, Kolmogorov and Zabih [248] proved that a binary Gibbs energy function can be
minimized using graph cuts if and only if E^{i,j}_smoothness(0, 0) + E^{i,j}_smoothness(1, 1) ≤
E^{i,j}_smoothness(0, 1) + E^{i,j}_smoothness(1, 0) for each (i, j). See that paper for details on
how to handle non-Potts models.
³ This is a minor change from the previous section, where we assumed the labels were indexed
starting from 1.