4.2.5.1 Exhaustive Search
The exhaustive search can be used to find the true global optimum solution,
but this brute-force search is very expensive. If we want 0.1-pixel translation,
0.1-degree rotation, and 0.001 scaling accuracy, we would need about 10^10
iterations.
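The 10^10 figure can be reproduced under plausible assumptions about the search ranges, which the text does not state explicitly; the ranges below (translations within ±50 pixels, rotation within ±5 degrees, scaling in 0.95-1.05) are illustrative only.

```java
public class ExhaustiveSearchCost {
    // Number of grid steps needed to cover a range at a given resolution.
    public static long steps(double range, double step) {
        return Math.round(range / step);
    }

    public static void main(String[] args) {
        // Hypothetical search ranges; only the step sizes come from the text.
        long tx    = steps(100.0, 0.1);   // x translation: +/-50 px at 0.1 px
        long ty    = steps(100.0, 0.1);   // y translation: +/-50 px at 0.1 px
        long rot   = steps(10.0, 0.1);    // rotation: +/-5 degrees at 0.1 deg
        long scale = steps(0.1, 0.001);   // scaling: 0.95..1.05 at 0.001
        long total = tx * ty * rot * scale;
        System.out.println(total);        // 10^10 under these assumed ranges
    }
}
```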
4.2.5.2 Simplex
The downhill simplex method implements an entirely self-contained strategy
and does not make use of any 1D optimization algorithm [31]. It requires only
function evaluations, not derivatives.
A simplex is a geometrical figure consisting, in N dimensions, of N + 1
vertices and all their interconnecting line segments, polygonal faces, etc. In the
optimization process, the simplex reflects, expands, and contracts around a ver-
tex, trying to enclose the optimum point within its interior.
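The basic simplex move is reflection of the worst vertex through the centroid of the remaining vertices; expansion and contraction use the same formula with different coefficients. A minimal sketch of the reflection step (not the authors' code):

```java
public class SimplexMoves {
    /**
     * Reflect the worst vertex through the centroid of the other vertices.
     * With reflection coefficient 1, the reflected point is
     * 2 * centroid - worstVertex.
     */
    public static double[] reflect(double[][] simplex, int worst) {
        int n = simplex[0].length;               // dimensionality (4 here)
        double[] centroid = new double[n];
        for (int i = 0; i < simplex.length; i++) {
            if (i == worst) continue;            // exclude the worst vertex
            for (int j = 0; j < n; j++) centroid[j] += simplex[i][j];
        }
        for (int j = 0; j < n; j++) centroid[j] /= (simplex.length - 1);

        double[] reflected = new double[n];
        for (int j = 0; j < n; j++)
            reflected[j] = 2.0 * centroid[j] - simplex[worst][j];
        return reflected;
    }
}
```

Expansion would move further along the same direction (coefficient 2), and contraction would pull back toward the centroid (coefficient 0.5).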
The downhill simplex method must be started with N + 1 points defining
an initial simplex. For our image registration problem, there are four unknown
registration parameters. Given an initial guess of the registration parameter
vector (t_x, t_y, θ, s), a non-degenerate simplex can be formed by the points
(t_x + 1, t_y, θ, s), (t_x, t_y + 1, θ, s), (t_x, t_y, θ + 1, s), and (t_x, t_y, θ, s + 0.1).
In our implementation, the initial registration parameter vector is (0, 0, 0, 1),
i.e., both translation offsets are 0, the rotation angle is 0, and the scaling
factor is 1.
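The construction above can be sketched directly; the class and method names are illustrative, but the perturbations and the initial guess (0, 0, 0, 1) come from the text:

```java
public class InitialSimplex {
    /** Build the N + 1 = 5 starting vertices from the guess (tx, ty, theta, s). */
    public static double[][] build(double tx, double ty, double theta, double s) {
        return new double[][] {
            { tx,       ty,       theta,       s       },  // initial guess
            { tx + 1.0, ty,       theta,       s       },  // +1 pixel in x
            { tx,       ty + 1.0, theta,       s       },  // +1 pixel in y
            { tx,       ty,       theta + 1.0, s       },  // +1 degree rotation
            { tx,       ty,       theta,       s + 0.1 }   // +0.1 scaling
        };
    }

    public static void main(String[] args) {
        double[][] vertices = build(0, 0, 0, 1);  // (0, 0, 0, 1) from the text
        System.out.println(vertices.length);      // 5 vertices for a 4-D simplex
    }
}
```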
Note that we use different guesses for the problem's characteristic length
scale in different directions, since we do not expect the optimized scaling
factor to deviate significantly from 1. We use degrees rather than radians
so that the characteristic length scale of the rotation angle is comparable with
those of the translations (1 radian amounts to 57.3°). In Java, a zero scaling
factor will generate an exception, and in reality a non-positive scaling factor
does not make any sense. From the characteristics of the retinal image
registration problem, one knows that the scaling factor falls in the range
0.95-1.05. The simplex optimization routine cannot guarantee that. Thus, at
each iteration the scaling factor is checked against a proper range. Relaxed
lower and upper bounds are used: we clamp the scaling factor at a lower
bound of 0.9 and an upper bound of 1.10.
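The per-iteration clamp is straightforward; a minimal sketch using the bounds stated in the text (0.9 and 1.10):

```java
public class ScaleClamp {
    // Relaxed bounds around the expected 0.95-1.05 range of the scaling factor.
    static final double LOWER = 0.9;
    static final double UPPER = 1.10;

    /** Force the scaling factor back into [LOWER, UPPER] after each iteration. */
    public static double clamp(double s) {
        if (s < LOWER) return LOWER;
        if (s > UPPER) return UPPER;
        return s;
    }
}
```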
The termination condition in any multidimensional optimization routine can
be delicate. One can terminate the iterative process when the vector distance
moved in a step is fractionally smaller in magnitude than some preset tolerance.
Alternatively, one can require that the decrease in the function value in the
terminating step be fractionally smaller than some tolerance.
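A common way to express the function-value criterion is to compare the best and worst function values in the simplex against a fractional tolerance; a sketch under that assumption (the symmetric denominator avoids division by zero when both values are small):

```java
public class Termination {
    /**
     * Fractional-decrease test: stop when the best and worst function
     * values across the simplex differ by less than a fractional tolerance.
     */
    public static boolean converged(double fBest, double fWorst, double ftol) {
        double denom = Math.abs(fBest) + Math.abs(fWorst);
        if (denom == 0.0) return true;   // both values exactly zero
        return 2.0 * Math.abs(fWorst - fBest) / denom < ftol;
    }
}
```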