camera. View synthesis is a major problem in image-based rendering, the general
problem of using multiple images of an environment to create realistic new views
from different perspectives.
When the original images are close together either in space or time, the view
synthesis problem is fundamentally one of interpolation. A common application is
changing the frame rate of video, for example from twenty-four frames per second
to thirty frames per second. Since adjacent images are very similar, the optical flow
vectors are small and easily estimated, and occlusions are unlikely to occur. A
morphing/warping algorithm like the one described in the previous section will do a
fairly good job of generating the interpolated images.
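A minimal sketch of this kind of flow-based frame interpolation in Python/NumPy (an illustrative simplification, not the exact algorithm of the previous section: it uses an approximate backward warp with nearest-neighbor sampling, and the function name and flow convention are assumptions made here):

```python
import numpy as np

def morph_interpolate(img_a, img_b, flow_ab, t):
    """Synthesize an intermediate frame at time t in [0, 1] by warping both
    source images along a dense flow field and cross-dissolving.
    flow_ab[y, x] = (dy, dx) maps a pixel of img_a to its match in img_b.
    Nearest-neighbor backward warp, for brevity."""
    h, w = img_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample A a fraction t of the way back along the flow...
    ay = np.clip(np.round(ys - t * flow_ab[..., 0]).astype(int), 0, h - 1)
    ax = np.clip(np.round(xs - t * flow_ab[..., 1]).astype(int), 0, w - 1)
    # ...and B the remaining fraction (1 - t) forward along it.
    by = np.clip(np.round(ys + (1 - t) * flow_ab[..., 0]).astype(int), 0, h - 1)
    bx = np.clip(np.round(xs + (1 - t) * flow_ab[..., 1]).astype(int), 0, w - 1)
    warped_a = img_a[ay, ax]
    warped_b = img_b[by, bx]
    # Cross-dissolve the two warped images.
    return (1 - t) * warped_a + t * warped_b
```

A production implementation would forward-splat or use bilinear sampling rather than rounding to the nearest pixel, but the structure — warp both endpoints partway, then blend — is the same.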
However, when the images are far apart, the morphing algorithm will not generate
interpolated views that look like they could have been generated from an actual
camera, even when the correspondence has no errors. Figure 5.26 illustrates an
example of the same planar object imaged from two different perspectives. If we
apply Equations (5.61)-(5.62) using the correct correspondence fields defined by the
underlying projective transformation, we obtain distorted intermediate images that
do not match our intuition for a natural interpolation. For example, we expect that
straight lines should remain straight in the interpolated images, whereas they clearly
bend in Figure 5.26. The fundamental problem is that the morphing algorithm does
not guarantee that new intermediate images follow the rules of perspective projection
of the underlying scene. We call a view synthesis algorithm that does follow these
rules physically consistent.
Chen and Williams [88] observed that the morphing algorithm is physically consistent
if (1) the image planes corresponding to the source and synthesized views are all
parallel to each other, and (2) the source and synthesized camera centers are
collinear, corresponding to camera motion parallel to the image planes. This
situation is illustrated in Figure 5.27a. They called this special case of the algorithm
view interpolation. As we can see in Figure 5.27b, the intermediate result from view
interpolation is physically valid. That is, these synthesized images look like they were
taken with a real camera.
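Why this configuration is special can be sketched in one line (a simplified fronto-parallel setup, with notation introduced here only for illustration). Suppose the camera center moves as c(t) = (1 - t) c_0 + t c_1 along a line parallel to the image planes, and a scene point P = (X, Y, Z) is imaged by a pinhole camera with focal length f. Since the motion is parallel to the image planes, the depth Z of P is the same for every camera on the line, and its projected x-coordinate is

x(t) = f (X - c_x(t)) / Z = (1 - t) f (X - c_x(0)) / Z + t f (X - c_x(1)) / Z = (1 - t) x(0) + t x(1).

That is, every image point moves linearly in t, which is exactly the motion that linear interpolation of correspondences produces. Note also that the disparity x(0) - x(1) is proportional to 1/Z, so nearer surfaces have larger disparities.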
When the images have been taken from cameras that are far apart, we need a
method for dealing with folds and holes in the view synthesis result, as illustrated in
Figure 5.28. Folds are introduced when a pixel in the synthesized view is consistent
with multiple correspondences between the two source images. In this case, the pixel
can take its intensity from the correspondence with the largest disparity (i.e., the
surface closest to the camera). Holes are introduced when a pixel in the synthesized
view has no match in one or both of the source images. In this case, we can interpolate
the intensities across the hole (e.g., using image inpainting techniques from Section 3.4).
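The fold and hole handling above can be sketched as follows (a minimal one-dimensional forward-warping sketch in Python, assuming rectified views with purely horizontal disparity; the function name and the NaN-marks-a-hole convention are choices made here, not the book's):

```python
import numpy as np

def view_interpolate(img_a, disparity, t):
    """Forward-warp the left image by a fraction t of its per-pixel
    horizontal disparity. Folds (several sources landing on one target
    pixel) are resolved by keeping the largest disparity, i.e., the
    surface nearest the camera. Holes (targets no source reaches) are
    left as NaN, to be filled later, e.g., by inpainting."""
    h, w = img_a.shape
    out = np.full((h, w), np.nan)
    zbuf = np.full((h, w), -np.inf)  # best (largest) disparity seen per pixel
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            xt = int(round(x + t * d))  # pixel position slides linearly in t
            if 0 <= xt < w and d > zbuf[y, xt]:
                zbuf[y, xt] = d          # nearer surface wins the fold
                out[y, xt] = img_a[y, x]
    return out
```

A full implementation would warp both source images and blend them, but the z-buffer-over-disparity idea for folds and the explicit hole mask are the essential pieces.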
Figure 5.26. Directly morphing between two very different views of the same object (left
and right images) can result in unrealistic intermediate images, even though the supplied
correspondence fields contain no errors. (Panels show interpolated frames at t = 0, 0.25,
0.5, 0.75, and 1.)