Fig. 13.6 Top row: the input rate and integration time of a commercial camera. Bottom row: the coded-exposure sampling rate with varying exposure times
13.3.1 Coded Exposures
In the approach based on frame alignment, multiple low dynamic range (LDR) images
of varying exposure times are integrated to form a high dynamic range (HDR) image.
Figure 13.6 illustrates this approach. The LDR sub-frame image with the shortest
exposure is called the 'seed' image; its exposure time is denoted t_S. The other
images are exposed for longer and can be regarded as combinations of unit-exposure
images subject to motion. With this seed-image approach, no optimized shutter
open/closed pattern is required, so the method works across different camera speeds
and scene luminances. Taking t_S as the unit exposure time and T_2, T_3, ..., T_L as
the subsequent exposure times, we ensure that every longer exposure time is an
integer multiple of t_S.
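To make the exposure-ladder constraint concrete, the following is a minimal Python sketch; the function name, multipliers, and example values are illustrative assumptions, not taken from the text. It builds exposure times T_2, ..., T_L as integer multiples of the seed exposure t_S and verifies that constraint.

def exposure_ladder(t_s, multipliers):
    """Return exposure times [t_S, T_2, ..., T_L] as integer multiples of the seed t_S."""
    times = [t_s] + [k * t_s for k in multipliers]
    # Sanity check: every exposure time must be an integer multiple of t_S.
    assert all(abs(t / t_s - round(t / t_s)) < 1e-9 for t in times)
    return times

# Example: seed exposure of 1 ms, longer exposures at 2x, 4x, and 8x the seed.
print(exposure_ladder(1e-3, [2, 4, 8]))   # [0.001, 0.002, 0.004, 0.008]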
For the general case in which motion blur is significant due to long exposure
times, we create the output image by aligning and combining the short-exposure-
time LDR sub-frame images. In our experience with low-light imaging, sub-frame
images are harder to align when their exposure times differ. Our approach is
therefore to first align the images with equal exposure times, and then interpolate
the motion of the images with other exposure times. When multiple exposure
patterns are involved in the capture, or when higher-precision motion interpolation
is needed, an accelerometer can be used to aid motion estimation and interpolation.
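The sketch below illustrates the interpolation step under a simple assumption of constant velocity between two aligned equal-exposure (seed) frames; the translation-only motion model, function name, and example timestamps are ours and are not specified by the text (an accelerometer trace could replace the constant-velocity assumption).

import numpy as np

def interpolate_motion(shift_ab, t_a, t_b, frame_times):
    """shift_ab: (dx, dy) estimated between seed frames captured at times t_a and t_b.
    frame_times: center-of-exposure timestamps of the other frames.
    Returns an interpolated (dx, dy) for each frame, relative to frame A."""
    shift_ab = np.asarray(shift_ab, dtype=float)
    out = []
    for t in frame_times:
        alpha = (t - t_a) / (t_b - t_a)   # fraction of the way from A to B
        out.append(alpha * shift_ab)      # constant-velocity interpolation
    return out

# Example: a 3-pixel horizontal shift between seed frames at t = 0 ms and t = 10 ms;
# a long-exposure frame centered at t = 4 ms receives 40% of that shift.
print(interpolate_motion((3.0, 0.0), 0.0, 10.0, [4.0]))  # [array([1.2, 0. ])]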
To further improve the quality of the output HDR image, we generate a map of
weight values based on the camera's photopic response curve and apply these weights
to the pixel values of the LDR images. This normalizes and blends the pixels, and is
performed after the LDR images have been aligned. The weight function assigns low
weights to pixels with values near 0 or near the maximum pixel value, and higher
weights to pixels with mid-range values. This mapping works well when the scene
contains well-defined spatial features that must be preserved during fusion of the
LDR images.
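As an illustration of this weighting and blending step, here is a small Python sketch. The hat-shaped weight below is a common stand-in used only for illustration; the text derives its weights from the camera's photopic response curve, which is not reproduced here. The function names and the normalization of each frame by its exposure time are our assumptions.

import numpy as np

def hat_weight(pixels, max_value=255.0):
    """Triangle ('hat') weight: 0 at the extremes, 1 at mid-range values."""
    mid = max_value / 2.0
    return 1.0 - np.abs(pixels - mid) / mid

def blend_hdr(ldr_images, exposure_times, max_value=255.0):
    """ldr_images: aligned LDR frames (float arrays of the same shape).
    exposure_times: the corresponding exposure times. Returns an HDR estimate."""
    num = np.zeros_like(ldr_images[0], dtype=float)
    den = np.zeros_like(ldr_images[0], dtype=float)
    for img, t in zip(ldr_images, exposure_times):
        w = hat_weight(img, max_value)
        num += w * (img / t)            # exposure-normalized contribution
        den += w
    return num / np.maximum(den, 1e-6)  # avoid division by zero where all weights vanish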