pixel matching is of little use in this region. Such areas can be avoided by using a
form of feature extraction: pixel matching is limited to parts of the image where
there is notable variation in neighboring pixels, e.g., at corners and edges. Fig-
ure 5.6(c) shows the regions that result from the feature extraction method em-
ployed by the authors. Limiting the pixel matching to these regions helps avoid
mismatches. Part (d) of the figure shows the corresponding offset vectors, col-
ored by probability. The vectors shown in lighter gray have lower pixel-matching
probability.
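The idea of restricting matching to high-variation regions can be sketched with a simple local-variance test. This is only a stand-in for the authors' actual feature extractor, and the window size and threshold are illustrative:

```python
# Sketch: mark pixels whose neighborhood shows notable variation, so that
# pixel matching can be limited to corners and edges. The variance test is
# a simplified stand-in for a real corner/edge detector.
import numpy as np

def feature_mask(image, window=5, threshold=50.0):
    """Return a boolean mask of pixels with high local intensity variation."""
    h, w = image.shape
    r = window // 2
    mask = np.zeros((h, w), dtype=bool)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = image[y - r:y + r + 1, x - r:x + r + 1]
            # High variance in the neighborhood suggests a corner or edge.
            mask[y, x] = patch.var() > threshold
    return mask
```

Flat regions produce a variance near zero and are excluded, while pixels near an intensity edge pass the threshold.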
The second part of the algorithm, the motion consistency estimation, consid-
ers how well the offset vectors match between neighboring pixels. This infor-
mation is used to construct a smooth final correspondence vector field. When
the offset vectors match well between pixels in a region, it can be expected that
the average offset vector is a good representation of the general optical flow. One
option would be to fit the offset vectors to a global projective transformation
between the two images, which would correspond to the different camera
positions. However, the images are not assumed to be corrected for camera dis-
tortion, so the authors use a more general technique. The approach fits a locally
smooth function to the offset vectors using a method known as locally weighted
linear regression. The pixel-match probability serves as the weight for the off-
sets: vectors for which the pixel-match probability is highest are given the most
weight.
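The weighted fit might be sketched as follows, assuming a linear offset model solved by weighted least squares around a query point. The Gaussian spatial weighting and all names are illustrative, not the authors' exact formulation:

```python
# Sketch of locally weighted linear regression for smoothing offset vectors.
# The pixel-match probabilities act as weights, combined here with a Gaussian
# spatial falloff (an assumption; the bandwidth is illustrative).
import numpy as np

def smooth_offset(positions, offsets, probs, query, bandwidth=10.0):
    """Fit offset ~ A @ [x, y, 1] by weighted least squares, evaluate at `query`."""
    positions = np.asarray(positions, float)   # (n, 2) pixel coordinates
    offsets = np.asarray(offsets, float)       # (n, 2) offset vectors
    d2 = ((positions - query) ** 2).sum(axis=1)
    # Weight = match probability * spatial locality.
    w = np.asarray(probs, float) * np.exp(-d2 / (2 * bandwidth ** 2))
    X = np.hstack([positions, np.ones((len(positions), 1))])  # design matrix
    sw = np.sqrt(w)[:, None]
    # Weighted least squares via the sqrt-weight trick.
    coeffs, *_ = np.linalg.lstsq(sw * X, sw * offsets, rcond=None)
    return np.array([query[0], query[1], 1.0]) @ coeffs
```

High-probability vectors dominate the fit, so a few low-confidence mismatches barely perturb the smoothed field.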
The authors employ an iterative approach to computing the final correspon-
dence vector field. The initial matches are smoothed by the regression, and then
the pixel-matching probabilities are recomputed from these smoothed vectors
(other techniques from computer vision are applied in the process). Figure 5.6(e)
shows the result. Repeating this process has the effect of improving the corre-
spondence field and removing the incorrect offset vectors (Figure 5.6(f)), so the
process is repeated until no more improvement results. At this point, the final
correspondence field can be constructed (Figure 5.6(g)). The difference between
the actual secondary image and the estimated image computed by extrapolating
the correspondence field is shown in Figure 5.6. The pixels are darkest where they
match. As expected, no matches are found for pixels on the teapot.
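The iterate-until-no-improvement loop can be sketched generically. Here `smooth` and `reestimate` stand in for the regression and probability-recomputation steps, and the stopping tolerance is an assumption:

```python
# Sketch of the alternating refinement loop: smooth the offsets, re-estimate
# the match probabilities, and stop once the field no longer changes.
import numpy as np

def refine(offsets, probs, smooth, reestimate, max_iters=20, tol=1e-4):
    """Alternate smoothing and probability re-estimation until the field settles."""
    prev = np.asarray(offsets, float)
    for _ in range(max_iters):
        smoothed = smooth(prev, probs)      # e.g., locally weighted regression
        probs = reestimate(smoothed)        # recompute match probabilities
        change = np.abs(smoothed - prev).max()
        prev = smoothed
        if change < tol:                    # no further improvement
            break
    return prev, probs
```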
5.1.5 Stereo Reconstruction
As described previously, stereo correspondence is the problem of matching points
between images. Once points are matched, the difference in their positions can
be used to approximate the 3D position of the point in the scene to which they
correspond. This is the problem of stereo reconstruction. The basic problem is to
find the 3D coordinates of a point x in an environment that projects to point x1 and