However, unlike our interpretation in the binary labeling problem, after we compute the minimum cut, all of the nodes separated from the source node are given the label α, and all of the nodes separated from the sink node keep their current label. This is a little counterintuitive and opposite our interpretation earlier in this section and in Chapter 2, but we maintain this notation to be consistent with the original formulation. We can see that since all the nodes already labeled α are connected to the sink node with infinite weight, the only outcome of solving the subproblem is that some nodes not currently labeled α change their label to α.
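As a concrete illustration of this convention, the sketch below sets up only the unary (data) terms of one α-expansion subproblem using the PyMaxflow library and reads the new labels off the minimum cut. The pairwise smoothness terms, and the auxiliary nodes they require, are omitted for brevity; the data-cost array D is a hypothetical placeholder, where D[p, l] is the cost of assigning label l to node p.

import numpy as np
import maxflow  # PyMaxflow: pip install PyMaxflow

def expansion_move_unary_only(labels, alpha, D):
    """One alpha-expansion subproblem with unary terms only (pairwise terms omitted).

    labels: (P,) current integer labels; D: (P, N) hypothetical data costs D[p, l].
    """
    P = labels.shape[0]
    g = maxflow.Graph[float]()
    nodes = g.add_nodes(P)
    INF = 1e9
    for p in range(P):
        if labels[p] == alpha:
            # Nodes already labeled alpha get an infinite-weight link to the sink,
            # so the cut can never separate them from the sink: they keep alpha.
            g.add_tedge(nodes[p], D[p, alpha], INF)
        else:
            # Cutting the sink link (node ends on the source side) costs D[p, alpha];
            # cutting the source link (node stays on the sink side) costs D[p, labels[p]].
            g.add_tedge(nodes[p], D[p, labels[p]], D[p, alpha])
    g.maxflow()
    new_labels = labels.copy()
    for p in range(P):
        if g.get_segment(nodes[p]) == 0:   # source side: node takes the label alpha
            new_labels[p] = alpha
        # sink side: node keeps its current label
    return new_labels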
For the overall algorithm with N labels, we iterate the following steps:
1. Initialize the labeling L with random labels at each node.
2. For i = 1, ..., N:
   a) Solve the α-expansion subproblem with α = i. Finding the minimum cut for this binary labeling problem produces a candidate labeling L′.
   b) If E(L′) < E(L), replace L with L′.
3. If a cycle through all N labels has produced no improvement in the cost function, stop and return the converged labels L. Otherwise, go to step 2.
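To make the iteration concrete, here is a minimal sketch of this outer loop for a Potts-style energy, assuming a hypothetical function expansion_move(labels, alpha, D, neighbors, lam) that solves the binary α-expansion subproblem (e.g., with a graph-cut solver such as the one sketched above, extended with pairwise terms) and returns a candidate labeling. D, neighbors, and lam are illustrative inputs, not notation from the text.

import numpy as np

def potts_energy(labels, D, neighbors, lam):
    # E(L) = sum_p D[p, L_p] + lam * (number of neighboring pairs with different labels)
    unary = D[np.arange(labels.size), labels].sum()
    pairwise = sum(int(labels[p] != labels[q]) for p, q in neighbors)
    return unary + lam * pairwise

def alpha_expansion(D, neighbors, lam, expansion_move, rng=np.random):
    P, N = D.shape
    labels = rng.randint(0, N, size=P)            # step 1: random initial labeling
    while True:
        improved = False
        for alpha in range(N):                    # step 2: one expansion move per label
            candidate = expansion_move(labels, alpha, D, neighbors, lam)
            if potts_energy(candidate, D, neighbors, lam) < potts_energy(labels, D, neighbors, lam):
                labels = candidate                # step 2b: keep the lower-cost labeling
                improved = True
        if not improved:                          # step 3: a full cycle with no improvement
            return labels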
While the overall solution to the multi-label problem resulting from α-expansion doesn't enjoy the global optimality guarantee that we have for the binary-label problem, it still has several desirable properties. First, it provably converges in a finite number of iterations, and convergence is relatively fast since a large number of spatially distant pixels can change their label simultaneously in each subproblem. Second, Boykov et al. proved that the cost of the labeling at convergence is within a known factor of the global minimum cost. In particular, for the multi-label Potts model, the cost of the converged labeling is at most twice the global minimum cost. This is an excellent result considering that finding the global minimum of the multi-label problem is known to be NP-hard.
A.4 NEWTON METHODS FOR NONLINEAR SUM-OF-SQUARES OPTIMIZATION
Many computer vision optimization problems can be formulated as the minimization of the sum of squared terms that depend nonlinearly on the parameters to be estimated. For example, the projective transformation estimation problem in Section 5.1, the bundle adjustment problem in Section 6.5.3, and the inverse kinematics problem in Section 7.4.2 can all be cast in this form. In this section, we briefly review Newton methods for this type of optimization problem, leading up to the Levenberg-Marquardt algorithm frequently used in computer vision problems.
We assume the functional we want to minimize has the form
F(θ) = (x − f(θ))^T (x − f(θ))        (A.16)

where θ ∈ R^M is the vector of parameters we want to estimate and x ∈ R^N is a given vector of measurements. The function f maps a choice of the parameters θ to a vector in R^N; we interpret the minimization of Equation (A.16) as trying to bring the model of the system specified by f as close as possible to the actual observed measurements x.
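As a toy instance of this form, suppose f models an exponential decay f(θ) = θ₁ exp(−θ₂ t) sampled at known times t (a made-up model, used here only for illustration). The sketch below evaluates the residual x − f(θ) and the cost F(θ) of Equation (A.16), and hands the residual to scipy.optimize.least_squares, whose 'lm' method applies a Levenberg-Marquardt solver of the kind discussed in this section.

import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 4.0, 20)                      # known sample times (illustrative)
theta_true = np.array([2.0, 0.8])
x = theta_true[0] * np.exp(-theta_true[1] * t)     # measurements x in R^N (here N = 20)

def f(theta):
    # Model prediction f(theta) in R^N for parameters theta in R^M (here M = 2).
    return theta[0] * np.exp(-theta[1] * t)

def residual(theta):
    return x - f(theta)                            # x - f(theta)

def F(theta):
    r = residual(theta)
    return r @ r                                   # F(theta) = (x - f(theta))^T (x - f(theta))

theta0 = np.array([1.0, 1.0])
result = least_squares(residual, theta0, method='lm')   # Levenberg-Marquardt (MINPACK)
print(F(theta0), F(result.x))                      # cost before and after optimization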