wish is to manage the multiplicity without sacrificing the potential contributions of a
complex combination, even beyond what human experts can achieve today.
Therefore, the task of fusion in image processing is closely related to decision
making. On the other hand, it has little to do with the phase it is often associated with,
namely the geometric registration of images; this phase, however, is today generally
considered unavoidable, and many image fusion studies simply carry it out and leave
the task of decision making to a human operator.
The objective of registration is to overlap exactly the pixels corresponding to the
same object observed in different images. This phase can be made easier if there is
a recognized absolute frame of reference to describe the scene. This is the case, for
example, in mapping or geography applications, which rely on geocoded frames of
reference, as well as in medical applications, for which conventional anatomical frames
of reference have been established. There are many kinds of registration techniques
(see, for example, [ZIT 03]), based on different principles: correlation, dynamic
programming, optical flow, elastic deformation, etc. (see [MAI 91] and [MAN 94] for
summaries of the methods used in aerial and satellite imagery and in medical imaging,
respectively).
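As a concrete illustration of the correlation principle cited above, here is a minimal
sketch (not taken from the source) of translation-only registration by phase correlation.
The image size, the simulated shift and the function name are assumptions made for the
example; real registration problems generally involve rotations, scale changes and local
deformations as well.

import numpy as np

def phase_correlation_shift(reference, moving):
    """Estimate the (row, col) translation by which `moving` is displaced from `reference`."""
    f_ref = np.fft.fft2(reference)
    f_mov = np.fft.fft2(moving)
    # Normalized cross-power spectrum; its inverse FFT peaks at the displacement.
    cross_power = f_mov * np.conj(f_ref)
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past half the image size wrap around to negative displacements.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, correlation.shape))

rng = np.random.default_rng(0)
reference = rng.random((128, 128))
moving = np.roll(reference, shift=(5, -3), axis=(0, 1))  # simulate a misaligned view
print(phase_correlation_shift(reference, moving))         # expected: (5, -3)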
For which objectives are we likely to use image fusion? First of all, to improve
the three main tasks of shape recognition: detection, recognition and identification.
Detection. In this case, this consists of validating the presence or absence of the
object we are searching for: the presence of a vehicle on a road, or of a stenosis in a
blood vessel. This is sometimes combined with the additional objective of tracking the
detected objects through a sequence of images.
Recognition or classification. A detected object is associated with one of the
categories of known or expected objects on the basis of photometric, geometric or
morphological criteria. This operation can be conducted on objects at very different
levels, from the pixel to complex sets of image components.
Identification. A detected and recognized object is identified when it is associated
with a single prototype in its category. Thus, once a vehicle has been detected with
infrared imaging, recognition can determine its type: truck, motorcycle or car, and the
conclusion of the identification will be that the vehicle is the milkman's truck, which
is the typical object monitored in this type of image...
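To make the detection, recognition and identification hierarchy concrete, the following
toy sketch (not from the source) detects an object by a contrast threshold, recognizes
its category by the nearest category centroid, and identifies it by the nearest prototype
within that category. All feature vectors, category names and thresholds are invented
for illustration.

import numpy as np

# Known categories (recognition level) and, within each, known prototypes
# (identification level), described by hypothetical 2D feature vectors
# (e.g. apparent length and thermal contrast of a vehicle).
CATEGORIES = {
    "truck":      {"milk truck": np.array([9.0, 0.8]), "fire truck": np.array([10.0, 0.9])},
    "car":        {"sedan":      np.array([4.5, 0.5]), "compact":    np.array([3.8, 0.4])},
    "motorcycle": {"scooter":    np.array([2.0, 0.3])},
}
DETECTION_THRESHOLD = 0.25   # minimum contrast for an object to be declared present

def detect(features):
    """Detection: decide whether an object of interest is present at all."""
    return features[1] >= DETECTION_THRESHOLD

def recognize(features):
    """Recognition: assign the detected object to the nearest category centroid."""
    centroids = {name: np.mean(list(protos.values()), axis=0)
                 for name, protos in CATEGORIES.items()}
    return min(centroids, key=lambda name: np.linalg.norm(features - centroids[name]))

def identify(features, category):
    """Identification: pick the single closest prototype inside the category."""
    protos = CATEGORIES[category]
    return min(protos, key=lambda name: np.linalg.norm(features - protos[name]))

observation = np.array([8.8, 0.75])              # measured features of one image object
if detect(observation):
    category = recognize(observation)            # "truck"
    instance = identify(observation, category)   # "milk truck"
    print(category, instance)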
However, applications other than shape recognition also require the implementation
of fusion methods. These operations can take place during the recognition process, but
at a more preliminary stage, and do not necessarily lead to a decision.
Segmentation. This constitutes a more focused objective than classification, since
it aims to extract specific objects as precisely as possible. It can consist simply