between the frames, which in this context is called image registration. Optical flow is applied to register adjacent images, but before this can be done the pixel brightness values have to be matched. This is accomplished by synthetically boosting the exposure of the image with the shorter exposure time to match the exposure time of the other image. These exposure times are known because the camera tags each captured image with exposure information, including the exposure time, and the camera is radiometrically calibrated.
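To make the exposure-matching step concrete, the following sketch (Python with NumPy; the names are hypothetical, not from the text) boosts a short-exposure image to simulate the longer exposure. The inv_response and response arguments stand in for the inverse and forward camera response curves that the radiometric calibration is assumed to provide.

import numpy as np

def boost_exposure(short_img, t_short, t_long, inv_response, response):
    # inv_response/response: hypothetical stand-ins for the calibrated
    # inverse and forward camera response curves.
    radiance = inv_response(short_img.astype(np.float64)) / t_short  # linearize, normalize by exposure time
    simulated = radiance * t_long                                    # simulate the longer exposure time
    return np.clip(response(simulated), 0.0, 255.0)                  # back to pixel values, clipped to 8-bit range

For a camera with a linear response, this reduces to scaling the pixel values by the ratio t_long / t_short.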
The HDR stitching for one frame, called the current frame, depends on the adjacent frames, called the previous and next frames. From these frames, three warped images are constructed: a previous-to-current image, a next-to-current image, and a bidirectional image, which is interpolated from the previous and next frames and then warped to the current frame. The two unidirectionally warped images are constructed using optical flow in one direction; the previous (or next) frame is registered to the current frame and then warped toward the current frame. The registration is done using a method similar to the pyramid-based optical flow algorithm described in Chapter 5.
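As a rough illustration of one unidirectional warp, the sketch below registers a neighboring frame to the current frame and warps it into the current frame's coordinates. OpenCV's Farneback flow (itself pyramidal) is used only as a stand-in for the pyramid-based algorithm of Chapter 5.

import cv2
import numpy as np

def warp_to_current(neighbor, current):
    g_cur = cv2.cvtColor(current, cv2.COLOR_BGR2GRAY)
    g_nb = cv2.cvtColor(neighbor, cv2.COLOR_BGR2GRAY)
    # Flow from the current frame to the neighbor: for each current-frame
    # pixel, where its match lies in the neighboring frame.
    flow = cv2.calcOpticalFlowFarneback(g_cur, g_nb, None,
                                        0.5, 4, 21, 3, 7, 1.5, 0)
    h, w = g_cur.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx + flow[..., 0]).astype(np.float32)
    map_y = (gy + flow[..., 1]).astype(np.float32)
    # Sample the neighbor at the matched positions, i.e., warp it
    # toward the current frame.
    return cv2.remap(neighbor, map_x, map_y, cv2.INTER_LINEAR)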
The process of creating the bidirectionally warped image is considerably more
complicated. It begins by constructing an image C by interpolating the previous
and next frames at the midpoint between them. That is, the previous and next
frames are both warped toward the current frame and the results are averaged
into C . The optical flow for this computation also uses a pyramid approach, but
the motion estimate uses linear transformations and translations, and the local
correction is symmetric. Ideally, the resulting interpolated image C would match
the current frame exactly; however, the motion estimation process is not perfect,
and even if it were, there is still the issue of camera shaking between frames.
The difference between C and the current frame actually provides a measure of
the consistency of camera motion, as it is constructed assuming constant velocity
between captured frames.
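A minimal sketch of this bidirectional interpolation, again with Farneback flow standing in for the pyramid estimate (and omitting the symmetric local correction described above), warps each neighbor halfway along the motion between them and averages the results, relying on the constant-velocity assumption.

import cv2
import numpy as np

def bidirectional_interpolation(prev_frame, next_frame):
    g_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g_next = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g_prev, g_next, None,
                                        0.5, 4, 21, 3, 7, 1.5, 0)
    h, w = g_prev.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Treat the flow as if defined at the temporal midpoint and warp
    # each neighbor halfway toward it (a common approximation).
    half_prev = cv2.remap(prev_frame, gx - 0.5 * flow[..., 0],
                          gy - 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    half_next = cv2.remap(next_frame, gx + 0.5 * flow[..., 0],
                          gy + 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    return cv2.addWeighted(half_prev, 0.5, half_next, 0.5, 0)  # average into C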
The optical flow used to construct C is improved by applying the correction necessary to match C to the current frame. This additional registration step uses a hierarchical version of optical flow estimation, in which the motion model in each region is a projective (perspective) transformation. This is a more constrained version of optical flow estimation. The constraint is necessary to reduce the possibility of erroneous warping due to low contrast and saturated pixels. All three interpolated images can be regarded as “stabilized” versions of the current frame.
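This refinement can be approximated, very loosely, by a single global projective alignment between C and the current frame, for example with OpenCV's intensity-based ECC alignment as sketched below; the text's method instead fits a projective transformation per region within a hierarchy.

import cv2
import numpy as np

def refine_to_current(interp_C, current):
    g_C = cv2.cvtColor(interp_C, cv2.COLOR_BGR2GRAY)
    g_cur = cv2.cvtColor(current, cv2.COLOR_BGR2GRAY)
    warp = np.eye(3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    # Estimate a homography that aligns C with the current frame.
    _, warp = cv2.findTransformECC(g_cur, g_C, warp,
                                   cv2.MOTION_HOMOGRAPHY, criteria)
    h, w = g_cur.shape
    # Apply the correction so that C lines up with the current frame.
    return cv2.warpPerspective(interp_C, warp, (w, h),
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)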
3. HDR recovery. All three interpolated images, along with the current frame image itself, are used to recover the HDR content of the current frame. The camera is radiometrically calibrated, so the pixels in all of these images can be converted