- PDAF (Probabilistic Data Association Filter) [BAR 74, BAR 80], in which all
of the validated measurements are assigned to the track. A weighted mean
combination is then performed, in accordance with the theorem of total
probability, as sketched below. The method assumes a spatially uniform
probability distribution of false alarms, so that their influence is
statistically isotropic and can be filtered out over the course of the
iterations in time. It is therefore suited to cases with higher numbers of
false alarms.
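To make the weighted mean combination concrete, the following is a minimal sketch of one PDAF measurement update in Python. The Gaussian likelihood model and the parameters p_detect and clutter_density are illustrative assumptions rather than details from [BAR 74, BAR 80]; the association weights follow the total probability decomposition described above.

```python
import numpy as np

def pdaf_update(x_pred, P_pred, z_list, H, R,
                p_detect=0.9, clutter_density=1e-4):
    """One PDAF measurement update (sketch, Gaussian assumptions)."""
    z_pred = H @ x_pred                      # predicted measurement
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain

    def likelihood(v):
        # Gaussian likelihood of the innovation v under N(0, S)
        d = len(v)
        norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
        return np.exp(-0.5 * v @ np.linalg.solve(S, v)) / norm

    innovations = [z - z_pred for z in z_list]
    # Association probabilities: one hypothesis per validated measurement,
    # plus the hypothesis that all measurements are false alarms
    w = np.array([p_detect * likelihood(v) for v in innovations])
    w0 = (1.0 - p_detect) * clutter_density  # illustrative clutter weight
    beta = np.append(w, w0)
    beta /= beta.sum()

    # Combined innovation: probability-weighted mean of the innovations
    nu = sum((b * v for b, v in zip(beta[:-1], innovations)),
             np.zeros_like(z_pred))
    x_upd = x_pred + K @ nu

    # Covariance: blend of the updated and non-updated cases, plus a
    # spread-of-innovations term reflecting the association uncertainty
    P_c = P_pred - K @ S @ K.T
    spread = sum((b * np.outer(v, v) for b, v in zip(beta[:-1], innovations)),
                 np.zeros((len(nu), len(nu)))) - np.outer(nu, nu)
    P_upd = beta[-1] * P_pred + (1 - beta[-1]) * P_c + K @ spread @ K.T
    return x_upd, P_upd
```

Note that the updated covariance is never smaller than in the single-measurement Kalman case: the spread term inflates it to account for the uncertainty of the association itself.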
What should be understood at this level is that the estimation and the
mechanisms for managing uncertainties must be taken into account jointly, by
explicitly exhibiting a measure of what is believed to be true for each
uncertainty that has to be managed (for example, the association between a
validated measurement and a track). Several such measures have been considered
(all three are illustrated in the sketch after this list), the most common of
which are:
- Fisher information [FIS 12], which relies on the inverse of a covariance matrix
[MAN 92];
- Shannon information, obtained from the logarithm of a probability
distribution (the log-likelihood) [MCI 96];
- Kullback-Leibler information [KUL 59], or cross-entropy, which measures the
divergence between two probability distributions. A discrimination gain
[KAS 96, KAS 97] can be calculated between the density predicted when no
observation of the target is made and the density predicted if one particular
sensor is handling it.
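As a rough illustration of these three measures, the sketch below evaluates each of them for Gaussian densities. The function names and the Gaussian assumption are ours; the Kullback-Leibler expression is the standard closed form for two multivariate normal distributions.

```python
import numpy as np

def fisher_information(P):
    """Fisher information of a Gaussian estimate: the inverse of its
    covariance matrix (a sharper estimate carries more information)."""
    return np.linalg.inv(P)

def shannon_information(p):
    """Shannon self-information -log(p) of an event of probability p (nats)."""
    return -np.log(p)

def kl_divergence_gaussian(mu0, P0, mu1, P1):
    """Kullback-Leibler divergence KL(N0 || N1) between two Gaussians.

    Read as a discrimination gain: e.g. N0 is the density predicted with
    no observation of the target, N1 the density predicted if a given
    sensor handles it (the pairing is illustrative).
    """
    d = len(mu0)
    P1_inv = np.linalg.inv(P1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(P1_inv @ P0) + diff @ P1_inv @ diff - d
                  + np.log(np.linalg.det(P1) / np.linalg.det(P0)))
```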
2.2.3. Controlling and supervising a data fusion chain
Another generic method for designing operational systems is to supervise the
data processing chain. This chain is assumed to be adaptive: for example, the
behavior of moving targets is governed by three competing dynamic models, and
a mechanism needs to be implemented to deal with the competition between these
three models. Two types of methods are found in the literature: alternately
switching from one model to another according to criteria that need to be
defined [ACK 70], or making the different models interact in a probabilistic
framework [BLO 89]. More generally, the objective is to control the sequencing
of the various processes, either by assuming that several processes run in
parallel and then deciding afterwards which one is optimal, or by defining a
processing chain made up of several steps, each step itself controlled by a
set of competing models. We then have to supervise which model controls the
current processing step.
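A minimal sketch of the probabilistic interaction between competing dynamic models, in the spirit of [BLO 89], is the Bayesian update of model probabilities under a Markov switching matrix. The three-model setup and all numeric values below are illustrative assumptions, and the per-model filters supplying the likelihoods are assumed to run elsewhere.

```python
import numpy as np

def update_model_probabilities(mu, transition, likelihoods):
    """One step of competition between dynamic models.

    mu          : current model probabilities, shape (n_models,)
    transition  : Markov switching matrix, transition[i, j] = P(j | i)
    likelihoods : likelihood of the latest measurement under each model
    """
    mu_pred = transition.T @ mu        # predicted model probabilities
    mu_upd = likelihoods * mu_pred     # Bayes update with measurement fit
    return mu_upd / mu_upd.sum()

# Example: three competing models (say constant velocity, coordinated
# turn, high acceleration); the values are purely illustrative.
mu = np.array([0.6, 0.3, 0.1])
T = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
lik = np.array([0.2, 1.5, 0.4])        # here model 2 fits the data best
print(update_model_probabilities(mu, T, lik))
```

The alternative, hard-switching approach [ACK 70] would instead select a single model (for instance, the argmax of the updated probabilities) and run only that one until a switching criterion fires.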
This problem of dynamically allocating resources is not, strictly speaking,
specific to the topic of data fusion. It arises wherever a sufficiently large
number of parameters has to be supervised for a system to function in an
optimal or sub-optimal way. This is the case in particular in the field of
multi-agent systems [FER 88, GAS 92]. A multi-sensor system nevertheless has
many specificities.