Fig. 16.5 Pyramidal matching (top) and corresponding correlation (bottom). From left to right: lowest level, intermediate level, and highest level.
(a) the uncertainty of the disparity is calculated over the window centered on the current pixel; (b) the window is extended in four orthogonal directions (±x, ±y) and the uncertainty of the disparities of the four new windows is calculated; (c) the window is extended in the direction of the smallest uncertainty.
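Steps (a)-(c) can be made concrete with a short sketch. The Python fragment below is only a minimal illustration, not the method of any cited paper: it assumes a precomputed disparity map, takes the variance of the disparities inside a window as the uncertainty measure (the text does not fix this measure), and the names window_uncertainty and grow_window, together with all window sizes, are illustrative.

```python
import numpy as np

def window_uncertainty(disparity, r0, r1, c0, c1):
    # Uncertainty of the disparities inside the window [r0, r1) x [c0, c1);
    # here taken to be their variance (an illustrative choice).
    return float(np.var(disparity[r0:r1, c0:c1]))

def grow_window(disparity, row, col, half=2, step=2, max_steps=5):
    # Adaptive window extension around (row, col):
    # (a) measure the uncertainty over the centered window,
    # (b) measure it for extensions in the four orthogonal directions (+/-x, +/-y),
    # (c) extend in the direction of smallest uncertainty, then repeat.
    h, w = disparity.shape
    r0, r1 = max(row - half, 0), min(row + half + 1, h)
    c0, c1 = max(col - half, 0), min(col + half + 1, w)
    for _ in range(max_steps):
        candidates = {
            'up':    (max(r0 - step, 0), r1, c0, c1),
            'down':  (r0, min(r1 + step, h), c0, c1),
            'left':  (r0, r1, max(c0 - step, 0), c1),
            'right': (r0, r1, c0, min(c1 + step, w)),
        }
        scores = {d: window_uncertainty(disparity, *win) for d, win in candidates.items()}
        best = min(scores, key=scores.get)
        if scores[best] >= window_uncertainty(disparity, r0, r1, c0, c1):
            break                      # no extension reduces the uncertainty
        r0, r1, c0, c1 = candidates[best]
    return r0, r1, c0, c1
```

Stopping when no extension reduces the uncertainty is one possible termination criterion; the text does not specify when growth ends.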
Region growing proceeds as follows: Image seeds, such as interest points, are
selected based on their correlation scores. Local geometric constraints are used to
guide the matching process. The final matches are checked by estimating the global
mapping function. A match is accurate if it agrees with the match predicted by
the mapping function (a minimal sketch of this seed-and-grow strategy is given after this paragraph). Pyramidal matching methods [17, 34] combine graph and series operations. They use images at different resolutions in order to reduce the time complexity of matching. Figure 16.5 shows the result of the correlation within a 3-level pyramid of the same image; at each level, a single correlation kernel is used. The matched features at the lower resolution levels guide the search for matching features at higher resolutions, as sketched in the second code fragment below. The main disadvantage of the pyramidal multiresolution approach is that matches may be lost entirely at the low-resolution levels and may not be recovered during the progression to the higher-resolution levels. An edge improvement technique for pyramids has been proposed in [66].
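The seed-and-grow strategy described above can be illustrated with a short sketch. The Python fragment below is a minimal illustration of the idea rather than the implementation of any cited work: the functions correlate and neighbors are assumed to be supplied by the caller, the local geometric constraint is simplified to a small search around the position predicted by an already accepted neighboring match, and the final global verification step is only indicated in a comment.

```python
def region_grow_matches(seeds, correlate, neighbors, score_thresh=0.8, search=1):
    # seeds     : list of (p, q, score) seed matches, p and q being (row, col)
    #             pixel coordinates in the first and second image
    # correlate : callable (p, q) -> correlation score between the windows
    #             centered on p (first image) and q (second image)
    # neighbors : callable p -> pixels adjacent to p in the first image
    accepted = {}
    for p, q, _ in sorted(seeds, key=lambda s: -s[2]):   # best seeds first
        accepted.setdefault(p, q)
    queue = list(accepted.items())
    while queue:
        p, q = queue.pop()
        disp = (q[0] - p[0], q[1] - p[1])
        for pn in neighbors(p):
            if pn in accepted:
                continue
            # Local geometric constraint (illustrative): the new match is searched
            # only near the position predicted by the neighbor's disparity.
            best_q, best_s = None, score_thresh
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    qc = (pn[0] + disp[0] + dr, pn[1] + disp[1] + dc)
                    s = correlate(pn, qc)
                    if s > best_s:
                        best_q, best_s = qc, s
            if best_q is not None:
                accepted[pn] = best_q
                queue.append((pn, best_q))
    # The global verification step from the text (estimating a mapping function and
    # keeping only the matches it predicts) would be applied to `accepted` here.
    return accepted
```

Processing the best-scoring seeds first is one common heuristic for making the growth robust; it is not prescribed by the text.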
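The coarse-to-fine search of Fig. 16.5 can be sketched in the same spirit. The fragment below is a minimal illustration under simplifying assumptions: the pyramid is built by 2x2 averaging, a single normalized cross-correlation kernel is used at every level (as in the figure), and only one match is propagated; the names build_pyramid, refine, and match_coarse_to_fine are illustrative.

```python
import numpy as np

def downsample(img):
    # Half-resolution image by 2x2 block averaging (one pyramid level).
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_pyramid(img, levels=3):
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr[::-1]                      # coarsest level first

def ncc(a, b):
    # Normalized cross-correlation of two equally sized patches.
    a, b = a - a.mean(), b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else -1.0

def refine(left, right, pl, pr, half=3, search=2):
    # Best correlation in a small neighborhood of pr with the patch centered on pl.
    def patch(img, p):
        return img[p[0] - half:p[0] + half + 1, p[1] - half:p[1] + half + 1]
    tpl = patch(left, pl)
    if tpl.shape != (2 * half + 1, 2 * half + 1):
        return pr                         # template runs off the image border
    best, best_s = pr, -2.0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            q = (pr[0] + dr, pr[1] + dc)
            win = patch(right, q)
            if win.shape == tpl.shape:
                s = ncc(tpl, win)
                if s > best_s:
                    best, best_s = q, s
    return best

def match_coarse_to_fine(left, right, pl_coarse, pr_coarse, levels=3):
    # Propagate a single match down a 3-level pyramid (cf. Fig. 16.5): the match
    # found at each coarse level, scaled by 2, seeds a small search at the next level.
    lp, rp = build_pyramid(left, levels), build_pyramid(right, levels)
    pl, pr = pl_coarse, pr_coarse
    for lvl in range(levels):
        pr = refine(lp[lvl], rp[lvl], pl, pr)
        if lvl + 1 < levels:
            pl = (2 * pl[0], 2 * pl[1])   # scale coordinates to the finer level
            pr = (2 * pr[0], 2 * pr[1])
    return pr
```

Because only a small neighborhood is searched at each finer level, a match missed at the coarsest level cannot be recovered later, which is exactly the disadvantage noted above.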
Correlation-based methods have several drawbacks including sensitivity to noise,
scene clutter, variations in texture, occlusions, perspective distortions, illumination,
and view angle changes [ 58 ]. However, they are efficient in many applications.
Several probabilistic approaches to image matching have been proposed recently.
Maybank [61] obtains two feature vectors v^(1), v^(2) from regions in the first and second images, respectively. Let B be the background hypothesis that v^(1) and v^(2) are obtained from independent image regions and let H be the hypothesis that v^(1) and v^(2) are obtained from matching image regions. The probability density functions p(v^(1), v^(2) | B) and p(v^(1), v^(2) | H) are learnt using one or more training images. The saliency of a given image region with feature vector v^(1) is, by definition, equal to the Kullback-Leibler divergence of p(v^(2) | v^(1), B) from p(v^(2) | v^(1), H).
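As a worked illustration of this definition, the sketch below computes the saliency under an additional assumption that is not made in the text: both conditional densities are modeled as multivariate Gaussians learnt from training data, so that the closed-form Gaussian Kullback-Leibler divergence can be used. The function names and the way the densities are fitted are illustrative.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    # Closed-form KL divergence D(N0 || N1) between two multivariate Gaussians.
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def saliency(v1, matched_pairs, background_samples):
    # Saliency of a region with feature vector v1 (a NumPy array), following the
    # definition in the text: the KL divergence of p(v2 | v1, B) from p(v2 | v1, H).
    # Illustrative modeling assumptions (not taken from the text):
    #   * under H, v2 is Gaussian around v1, with mean offset and covariance learnt
    #     from matched training pairs (matched_pairs is a list of (v1, v2) arrays);
    #   * under B, v2 is independent of v1, with mean and covariance learnt from
    #     background_samples (an array of feature vectors, one per row).
    residuals = np.array([b - a for a, b in matched_pairs])
    mu_h, cov_h = v1 + residuals.mean(axis=0), np.cov(residuals.T)
    mu_b, cov_b = background_samples.mean(axis=0), np.cov(background_samples.T)
    # "Divergence of p(. | B) from p(. | H)" is read here as D(p_B || p_H).
    return gaussian_kl(mu_b, cov_b, mu_h, cov_h)
```

Other density models, for example mixtures, could be substituted for the Gaussians without changing the definition of the saliency itself.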