tissue show up on images that measure this type of characteristic, resulting in doubts
as to whether a particular point belongs to one kind of tissue or the other, which is
an uncertainty due to both the phenomenon and the sensor. The delocalization of spatial information, which occurs because all of the information contained in a volume is grouped into the same pixel, is caused by the sensor and its resolution, and constitutes an imprecision regarding the location of the information in the image
(partial volume effect). Gibbs phenomena near sharp transitions, occurring in MRI or radar imagery, for example, are a source of imprecision caused by the digital reconstruction algorithms applied to the images. The representation of (symbolic) information in schematic form (with maps or atlases) is a source of both imprecision and uncertainty. These imperfections are then magnified by the primitives extracted from the
images, which serve as the basis for the fusion. The most familiar example is contour detection using Gaussian filters at different scales: as the Gaussian's standard deviation increases, we gain certainty regarding the presence of contours, but we lose accuracy regarding their location. This antagonism between accuracy and certainty has been identified as a characteristic trait of shape recognition approaches [SIM 89]. This antagonism often gives rise to contradictions in image fusion, since
there are several measurements available for one event: if the data is accurate, it is probably uncertain, and the sources may contradict one another; if the certainty is increased, this often comes at the price of greater imprecision, which renders the data less informative when the imprecision is too great. Fusion therefore requires a decision system that explicitly manages uncertainty and imprecision in order to avoid inconsistencies.
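The scale trade-off mentioned above can be made concrete with a minimal sketch (assuming NumPy is available; the 1D step signal, the noise level, and the 3-sigma kernel truncation are arbitrary illustrative choices). A derivative-of-Gaussian edge detector is applied to a noisy step at two scales: the larger standard deviation detects the contour more reliably, but its response spreads over a wider support, so the location is less precise.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Discrete 1D Gaussian kernel, truncated at 3*sigma, normalized to sum to 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def edge_response(signal, sigma):
    """Smooth with a Gaussian, then take |gradient|: a derivative-of-Gaussian detector."""
    smoothed = np.convolve(signal, gaussian_kernel(sigma), mode="same")
    return np.abs(np.gradient(smoothed))

# Noisy step edge: the true contour lies at index 100.
rng = np.random.default_rng(0)
step = np.zeros(200)
step[100:] = 1.0
noisy = step + 0.2 * rng.standard_normal(200)

for sigma in (1.0, 5.0):
    r = edge_response(noisy, sigma)
    # Half-max width of the response: a crude measure of localization imprecision.
    width = int((r > r.max() / 2).sum())
    print(f"sigma={sigma}: strongest response at index {r.argmax()}, half-max width {width}")
```

On a noise-free step, the small sigma localizes the contour within a couple of pixels while the large sigma spreads the response over roughly 2.4 sigma pixels; on the noisy signal, the small sigma is the one prone to spurious responses.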
Imprecision is not a feature specific to the data; it can also relate to the objectives and goals, especially when these are expressed in vague linguistic form.
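One common way to give such a vague linguistic term an operational form is a fuzzy membership function. The sketch below uses a trapezoidal membership; the term "bright" and its calibration on 8-bit intensities are hypothetical choices, not values taken from the text.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def bright(v):
    """Degree to which an 8-bit intensity v satisfies the vague goal 'bright' (hypothetical calibration)."""
    return trapezoid(v, 120, 180, 255, 256)

print(bright(100), bright(150), bright(200))  # -> 0.0 0.5 1.0
```

A pixel of intensity 150 thus belongs to the goal "bright" only to degree 0.5: the vagueness of the linguistic term is carried through explicitly instead of being forced into a yes/no decision.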
Finally, the spatial nature of information, specific to image processing, deserves
particular attention. Its introduction into fusion methods, which are often inspired by fields lacking this spatial nature, is not immediate, yet it is necessary to ensure the spatial consistency of the results. Imprecision is also present at this level. At a low
level, it consists of problems of registration or partial volume, for example. At a higher
level, it consists, for example, of relations between objects that can be intrinsically
vague or poorly defined (such as a relation like “to the left of”).
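A relation such as "to the left of" can be given a fuzzy degree rather than a Boolean value. The sketch below is one simple angle-based formulation for points; the linear decrease and the 90-degree cutoff are arbitrary modeling choices, and fuller approaches extend such degrees from points to whole objects.

```python
import math

def left_of_degree(point, reference):
    """Degree in [0, 1] to which `point` lies to the left of `reference`.

    Angle-based sketch: membership decreases linearly with the angle between
    the vector reference -> point and the 'left' direction (negative x axis),
    reaching 0 at 90 degrees and beyond.
    """
    dx = point[0] - reference[0]
    dy = point[1] - reference[1]
    theta = math.atan2(dy, dx)  # direction of reference -> point
    # Angular distance between theta and the 'left' direction (angle pi),
    # wrapped into [-pi, pi] before taking the absolute value:
    diff = abs(math.atan2(math.sin(theta - math.pi), math.cos(theta - math.pi)))
    return max(0.0, 1.0 - diff / (math.pi / 2))

print(left_of_degree((0, 5), (10, 5)))   # exactly to the left -> 1.0
print(left_of_degree((0, 0), (10, 10)))  # diagonally left     -> ~0.5
print(left_of_degree((20, 5), (10, 5)))  # to the right        -> 0.0
```

A point exactly to the left satisfies the relation to degree 1, a diagonal position to an intermediate degree, and a point to the right not at all, which matches the intuition that such relations are intrinsically gradual rather than crisp.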
As with other fusion applications, redundancy and complementarity between the
images we wish to fuse are assets in reducing imperfections such as uncertainty and
imprecision, clearing up ambiguities, completing information, and resolving conflicts. Here
are a few examples of complementarity in image fusion:
- involving the information itself: hidden parts that can differ from one depth image or aerial image to another;
- involving the type of information: anatomical and functional information for the same subject in different imaging modalities;