Based on resolution: Images to be fused may not have the same spatial resolution, a situation that arises frequently in the remote sensing community. When the spatial resolutions differ, one essentially enhances the details in the images with lower spatial resolution using the remaining images; this process is also referred to as pan-sharpening. In other cases, the images share the same spatial resolution. Such multiband images may be obtained from different sensors, at different times, or over different spectral wavelength bands.
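To make the former case concrete, here is a minimal sketch of Brovey-style pan-sharpening, in which upsampled multispectral bands are rescaled by the ratio of the panchromatic band to their mean intensity. The function name, the assumed integer resolution ratio, and the nearest-neighbour upsampling are illustrative choices, not a method prescribed by the text.

# A minimal sketch of Brovey-style pan-sharpening; assumes the
# panchromatic band has exactly `scale` times the resolution of the
# multispectral bands. Names and the upsampling rule are illustrative.
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """ms: (bands, h, w) low-resolution multispectral image.
    pan: (H, W) high-resolution panchromatic image, H = scale * h."""
    scale = pan.shape[0] // ms.shape[1]
    # Nearest-neighbour upsampling of each band onto the pan grid.
    ms_up = np.kron(ms, np.ones((1, scale, scale)))
    # Intensity of the upsampled multispectral image.
    intensity = ms_up.mean(axis=0)
    # Inject spatial detail by rescaling each band with pan/intensity.
    return ms_up * (pan / (intensity + eps))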
Based on nature of images: This categorization is somewhat different from the previous ones: here one is concerned with the type of the data rather than the type or technique of fusion. The sources of the images can be very different. Most real-world scenes encompass a very high dynamic range (HDR), which most digital cameras cannot capture due to their limited dynamic range. However, one can capture multiple images of the scene with varying exposure settings of the camera. This set of low dynamic range (LDR) images, when appropriately fused, generates a single image that provides an HDR-like appearance [89, 145]. This type of fusion is often referred to as multi-exposure image fusion.
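As an illustration, the following sketch fuses an exposure stack with a per-pixel "well-exposedness" weight, in the spirit of the approach of Mertens et al.; a practical implementation would add contrast and saturation weights and multiresolution blending. The names and the grayscale, [0, 1]-scaled input format are assumptions.

# A minimal sketch of multi-exposure fusion via well-exposedness
# weighting; real methods combine several quality measures.
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """images: (n, h, w) grayscale LDR exposures scaled to [0, 1]."""
    stack = np.asarray(images, dtype=np.float64)
    # Favour pixels whose intensity lies close to mid-grey (0.5).
    weights = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2)
    weights /= weights.sum(axis=0) + 1e-12
    # Weighted per-pixel average of the exposure stack.
    return (weights * stack).sum(axis=0)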
Similarly, the finite aperture of the camera leads to defocused objects in the image. Owing to the physics of the camera lens, only the regions at a certain distance from the focal plane can be captured in focus for a given focus setting. To obtain a single image in which all objects are in focus, one may capture multiple images while suitably varying the focus of the camera, and fuse them later. This multi-focus image fusion operates on different principles from multi-exposure fusion because the images are formed differently.
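A minimal sketch of this idea appears below: each pixel is taken from whichever input shows the stronger local Laplacian response, a common sharpness measure. The window size and the use of scipy are illustrative choices; practical methods add consistency checks near region boundaries.

# A minimal sketch of multi-focus fusion by per-pixel selection of
# the sharper input, scored by local Laplacian energy.
import numpy as np
from scipy import ndimage

def fuse_focus(img_a, img_b, window=7):
    """img_a, img_b: (h, w) grayscale images focused at different depths."""
    def focus_measure(img):
        lap = ndimage.laplace(img.astype(np.float64))
        # Local energy of the Laplacian as a per-pixel sharpness score.
        return ndimage.uniform_filter(lap ** 2, size=window)
    mask = focus_measure(img_a) >= focus_measure(img_b)
    return np.where(mask, img_a, img_b)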
In remote sensing, one often comes across multispectral image fusion, where typically 4-10 bands of a multispectral image are combined to yield a compact description of the scene. Advanced hyperspectral imaging sensors capture the scene in hundreds of bands depicting the spectral response of the constituent materials of the scene. Hyperspectral image fusion refers to combining these bands into a single image that retains most of the features of the input hyperspectral bands. The subsequent chapters of this monograph explore different techniques of hyperspectral image fusion.
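As a toy example of such a combination, the sketch below collapses a hyperspectral cube into a single image by weighting each band with its sample variance, so that more informative bands contribute more. This variance-based rule is purely illustrative and is not one of the techniques developed in the later chapters.

# A minimal sketch of hyperspectral band fusion as a weighted linear
# combination; the variance-based weights are an illustrative choice.
import numpy as np

def fuse_bands(cube):
    """cube: (bands, h, w) hyperspectral image cube."""
    weights = cube.reshape(cube.shape[0], -1).var(axis=1)
    weights /= weights.sum() + 1e-12
    # Per-pixel linear combination of all bands.
    return np.tensordot(weights, cube, axes=1)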
nity makes use of images obtained from different sensors (e.g., Positron emission,
X-rays) providing complementary information about the scene. The fused image
combines and enhances features from the input images which has proved to be
quite useful in medical diagnosis. This class of fusion is referred to as multi-modal
image fusion.
Based on processing level: One may consider an image as a two-dimensional array of individual pixels, or as a set of features relevant to the application of interest. Alternatively, one may regard the image as a means of arriving at a decision, such as the diagnosis of a disease, the presence of security-threatening objects, or the existence of water bodies within an area. Accordingly, images can be fused at the pixel, feature, or decision level, and fusion techniques can be categorized by the level of processing at which the actual fusion takes place. This categorization is particularly important because it determines the level of image representation on which the fusion operates.
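The distinction can be illustrated with a small sketch: pixel-level fusion here combines raw intensities directly, whereas decision-level fusion combines per-image binary decisions (say, "water present at this pixel") by majority vote. The inputs and decision maps are hypothetical.

# A minimal sketch contrasting two processing levels on made-up inputs.
import numpy as np

def fuse_pixel_level(images):
    """images: (n, h, w) co-registered inputs; fuse raw pixel values."""
    return np.mean(images, axis=0)

def fuse_decision_level(decisions):
    """decisions: (n, h, w) boolean decision maps, one per input image."""
    votes = np.sum(decisions, axis=0)
    return votes > decisions.shape[0] / 2  # majority vote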
A similar categorization can
 