linearisation of the coverage of the space in between the two contours, but these can make
implementation much more complex.
The approach again uses regularisation, where the snake energy is a discrete form of
Equation 6.40, so the energy at a snake point (unlike earlier formulations, e.g. Equation
6.11) is

E(v_i) = λ E_int(v_i) + (1 − λ) E_ext(v_i)    (6.43)
where the internal energy is formulated as

E_int(v_i) = |v_{i+1} − 2v_i + v_{i−1}|² / |v_{i+1} − v_{i−1}|²    (6.44)
The numerator expresses the curvature, seen earlier in the Greedy formulation. It is scaled
by a factor that makes the energy scale invariant, so there is no prejudice as to the size of
the contour. Without that prejudice, the contour will be attracted to smooth contours, given
an appropriate choice of the regularisation parameter. As such, the formulation is simply a
more sophisticated version of the Greedy algorithm, dispensing with several factors of
limited value: only one weighting parameter need now be chosen, rather than three, and
the elasticity constraint has also been removed, though that is perhaps more debatable.
The interest here is that the search for the optimal contour is constrained
to be between two contours, as in Figure 6.8 . By way of a snake's formulation, we seek the
contour with minimum energy. When this is applied to a contour which is bounded, then
we seek a minimum cost path. This is a natural target for the well-known Viterbi (dynamic
programming) algorithm (for its application in vision, see, for example, Geiger (1995)).
This is designed precisely to do this: to find a minimum cost path within specified bounds.
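Before turning to the dynamic-programming search, the per-point energy of Equations 6.43 and 6.44 can be sketched in code. This is a minimal illustration only; the function names, the NumPy usage, and the default λ are assumptions, not taken from the text.

```python
import numpy as np

def internal_energy(v_prev, v, v_next):
    """Scale-invariant curvature term of Equation 6.44:
    |v_{i+1} - 2 v_i + v_{i-1}|^2 / |v_{i+1} - v_{i-1}|^2.
    Each argument is a 2D point (x, y) as a NumPy array."""
    num = np.sum((v_next - 2.0 * v + v_prev) ** 2)
    den = np.sum((v_next - v_prev) ** 2)
    return num / den

def point_energy(v_prev, v, v_next, e_ext, lam=0.5):
    """Regularised snake-point energy of Equation 6.43:
    lambda * E_int + (1 - lambda) * E_ext."""
    return lam * internal_energy(v_prev, v, v_next) + (1.0 - lam) * e_ext
```

Note that collinear, evenly spaced points give zero internal energy, and scaling all three points by a common factor leaves the internal energy unchanged, which is the scale invariance claimed for Equation 6.44.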
In order to formulate it by dynamic programming we seek a cost function to be minimised.
We formulate a cost function C between one snake element and the next as

C_i(v_{i+1}, v_i) = min[ C_{i−1}(v_i, v_{i−1}) + λ E_int(v_i) + (1 − λ) E_ext(v_i) ]    (6.45)
In this way, we should be able to choose a path through a set of snakes that minimises the
total energy, formed by the compromise between internal and external energy at that point,
together with the path that led to the point. As such, we will need to store the energies at
points within the matrix, which corresponds directly to the earlier tessellation. We also
require a position matrix to store, for each stage (i), the position (v_{i−1}) that minimises the
cost function at that stage (C_i(v_{i+1}, v_i)). This also needs initialisation to set the first point,
C_1(v_1, v_0) = 0. Given a closed contour (one which is completely joined together), then for
an arbitrary start point, we separate the optimisation routine to determine the best starting
and end points for the contour. The full search space is illustrated in Figure 6.9 (a). Ideally,
this should be searched for a closed contour, the target contour of Figure 6.8 . It is
computationally less demanding to consider an open contour, where the ends do not join.
We can approximate a closed contour by considering it to be an open contour in two stages.
In the first stage, Figure 6.9 (b), the mid-points of the two lines at the start and end are taken
as the starting conditions. In the second stage, Figure 6.9 (c), the points determined by
dynamic programming half way round the contour (i.e. for two lines at N /2) are taken as
the start and the end points for a new open-contour dynamic programming search, which
then optimises the contour from these points. The premise is that the points half way round
the contour will be at, or close to, their optimal position after the first stage and it is the
points at, or near, the starting points in the first stage that require refinement. This reduces
the computational requirement by a factor of M 2 .
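The minimum-cost-path search described above can be sketched as a Viterbi-style dynamic program. The sketch below is an assumption-laden simplification: it uses a first-order smoothness term |v_i − v_{i−1}|² between successive candidates rather than the full three-point curvature state of Equation 6.45, and the function name and array layout are invented for illustration.

```python
import numpy as np

def viterbi_contour(ext, positions, lam=0.5):
    """Minimum-cost path through N stages of M candidate positions.

    ext:       (N, M) array of external energies E_ext per candidate.
    positions: (N, M, 2) array of candidate (x, y) coordinates.
    lam:       regularisation parameter (lambda in Equation 6.43).

    Uses a simplified first-order smoothness |v_i - v_{i-1}|^2 as the
    internal term, a common simplification of the three-point state in
    Equation 6.45. Returns the chosen candidate index at every stage.
    """
    N, M = ext.shape
    cost = np.full((N, M), np.inf)      # accumulated cost matrix
    back = np.zeros((N, M), dtype=int)  # position (backtracking) matrix
    cost[0] = (1.0 - lam) * ext[0]      # initialisation: external term only
    for i in range(1, N):
        for j in range(M):
            # smoothness to candidate j from every candidate at stage i-1
            d2 = np.sum((positions[i, j] - positions[i - 1]) ** 2, axis=1)
            trans = cost[i - 1] + lam * d2
            back[i, j] = int(np.argmin(trans))
            cost[i, j] = trans[back[i, j]] + (1.0 - lam) * ext[i, j]
    # backtrack from the cheapest final state
    path = [int(np.argmin(cost[-1]))]
    for i in range(N - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```

This searches a single open contour; the two-stage closed-contour approximation of Figure 6.9 would run such an open-contour search twice, the second time anchored at the points found half way round the contour in the first pass.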