The measurement model for a single sensor is as follows:

$$ z_t = e(Y, t) + n_t, \qquad [6.48] $$

where $Y = [\alpha = \mathrm{cpa}/v,\; \theta,\; t_{\mathrm{cpa}}]^t$.
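As an illustration, here is a minimal simulation sketch of [6.48] in Python. The excerpt does not specify the waveform $e$, so a hypothetical bearing-type profile consistent with the CPA geometry is assumed (the function name `e_bearing`, the angular convention, and all numerical values are illustrative, not the book's); the final assertion shows why a single sensor sees only $\alpha = \mathrm{cpa}/v$ and not cpa and $v$ separately.

```python
import numpy as np

def e_bearing(t, alpha, theta, t_cpa):
    """Hypothetical e(Y, t): bearing-type profile parameterized by
    Y = (alpha, theta, t_cpa), with alpha = cpa / v.  The angular
    offset convention is arbitrary for this illustration."""
    return theta + np.arctan2(t - t_cpa, alpha)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 600.0, 301)                    # time samples (s)
Y = dict(alpha=200.0, theta=0.3, t_cpa=300.0)       # alpha = 2000 m / 10 m/s
z = e_bearing(t, **Y) + 0.02 * rng.standard_normal(t.size)  # z_t = e(Y,t) + n_t

# cpa and v enter only through alpha = cpa/v: doubling both changes nothing,
# which is why a single sensor estimates 3 parameters (alpha, theta, t_cpa)
# and not the 4 in (cpa, v, theta, t_cpa).
assert np.allclose(e_bearing(t, 2000.0 / 10.0, 0.3, 300.0),
                   e_bearing(t, 4000.0 / 20.0, 0.3, 300.0))
```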
This means that, based on the data provided by a single sensor, it is only possible to estimate 3 parameters out of 4. Centralized processing then consists of gathering the observations provided by the sensors and processing them jointly. If we denote by $Z = (Z_1, Z_2)^t$ the vector of concatenated measurements, we have:

$$ Z = \begin{pmatrix} E(Y_1(X)) \\ E(Y_2(X)) \end{pmatrix} + \begin{pmatrix} N_1 \\ N_2 \end{pmatrix} = E(X) + N, \qquad [6.49] $$
where:

$$ X = (\mathrm{cpa}_1,\, v,\, \theta,\, t_{\mathrm{cpa}_1})^t, $$
$$ Y_1(X) = (\alpha_1 = \mathrm{cpa}_1/v,\; \theta,\; t_{\mathrm{cpa}_1})^t, \qquad [6.50] $$
$$ Y_2(X) = (\alpha_2 = \mathrm{cpa}_2/v,\; \theta,\; t_{\mathrm{cpa}_2})^t, $$
$$ \alpha_2 = \alpha_1 - (d/v)\sin\theta, \qquad t_{\mathrm{cpa}_2} = t_{\mathrm{cpa}_1} + (d/v)\cos\theta. $$
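Continuing the sketch, the centralized scheme of [6.49]–[6.50] can be mocked up as an exhaustive grid search over $X$; the hypothetical profile $e$, the grid bounds, and the noise level are again assumptions rather than the book's choices. The point is structural: with $n$ cells per axis, the search costs $n^4$ evaluations, which is the computational load discussed next.

```python
import numpy as np
from itertools import product

def e_bearing(t, alpha, theta, t_cpa):
    """Hypothetical e(Y, t) (same assumed profile as the previous sketch)."""
    return theta + np.arctan2(t - t_cpa, alpha)

def partial_vectors(cpa1, v, theta, t_cpa1, d):
    """Y1(X), Y2(X) of [6.50], d being the inter-sensor distance."""
    a1 = cpa1 / v
    a2 = a1 - (d / v) * np.sin(theta)
    return (a1, theta, t_cpa1), (a2, theta, t_cpa1 + (d / v) * np.cos(theta))

rng = np.random.default_rng(1)
t, d = np.linspace(0.0, 600.0, 121), 1500.0
X_true = (2000.0, 10.0, 0.3, 300.0)                 # (cpa1, v, theta, t_cpa1)
Y1, Y2 = partial_vectors(*X_true, d)
Z1 = e_bearing(t, *Y1) + 0.02 * rng.standard_normal(t.size)
Z2 = e_bearing(t, *Y2) + 0.02 * rng.standard_normal(t.size)

def cost(X):
    """Quadratic mismatch against the stacked model E(X) + N of [6.49]."""
    Y1h, Y2h = partial_vectors(*X, d)
    return (np.sum((Z1 - e_bearing(t, *Y1h)) ** 2)
            + np.sum((Z2 - e_bearing(t, *Y2h)) ** 2))

# Exhaustive search: n cells per axis costs n**4 evaluations, so even a
# coarse n = 12 grid needs 20,736; a grid fine enough for detection is
# orders of magnitude larger, hence the prohibitive load noted below.
n = 12
grid = [np.linspace(500.0, 5000.0, n), np.linspace(2.0, 20.0, n),
        np.linspace(-1.0, 1.0, n), np.linspace(0.0, 600.0, n)]
X_hat = min(product(*grid), key=cost)
```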
This tells us that it is possible to achieve both detection and complete target motion analysis. However, it can easily be shown [DON 00] that the computational load becomes prohibitive, because the discretization of the parameter space has to be fine enough: with, say, 100 cells per parameter, the 4-dimensional search grid already contains $100^4 = 10^8$ hypotheses. Furthermore, because of the context (detection at the scale of a whole network), the robustness of such a process is very problematic. One possible solution is to use decentralized processing. We can thus consider fusion at the target motion analysis level. The objective is then to estimate the complete parameter vector $X = (\mathrm{cpa}, v, \theta, t_{\mathrm{cpa}})^t$ from partial vectors estimated at the sensor level.
We then have the following geometric relations:
$$ \frac{\mathrm{cpa}_2}{v} = \alpha_2 = \alpha_1 - \frac{d}{v}\sin(\theta), \qquad t_{\mathrm{cpa}_2} = t_{\mathrm{cpa}_1} + \frac{d}{v}\cos(\theta). \qquad [6.51] $$
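It is worth making explicit that the relations [6.51] are what restore full observability: squaring and summing them eliminates $\theta$ and yields $d/v$, hence $v$, after which $\mathrm{cpa}_i = \alpha_i v$ and $\theta$ follow:

$$ (\alpha_1 - \alpha_2)^2 + (t_{\mathrm{cpa}_2} - t_{\mathrm{cpa}_1})^2 = \left(\frac{d}{v}\right)^2 \;\Longrightarrow\; v = \frac{d}{\sqrt{(\alpha_1 - \alpha_2)^2 + (t_{\mathrm{cpa}_2} - t_{\mathrm{cpa}_1})^2}}, \qquad \tan\theta = \frac{\alpha_1 - \alpha_2}{t_{\mathrm{cpa}_2} - t_{\mathrm{cpa}_1}}. $$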
We assume that the partial estimate densities are governed by the following laws:
$$ \begin{pmatrix} \hat{\alpha}_1 \\ \hat{\theta}_1 \\ \hat{t}_{\mathrm{cpa}_1} \end{pmatrix} \longrightarrow \mathcal{N}\!\left( \begin{pmatrix} \alpha_1 \\ \theta \\ t_{\mathrm{cpa}_1} \end{pmatrix},\, \Sigma_1 \right), \qquad \begin{pmatrix} \hat{\alpha}_2 \\ \hat{\theta}_2 \\ \hat{t}_{\mathrm{cpa}_2} \end{pmatrix} \longrightarrow \mathcal{N}\!\left( \begin{pmatrix} \alpha_2 \\ \theta \\ t_{\mathrm{cpa}_2} \end{pmatrix},\, \Sigma_2 \right). $$
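Under these Gaussian assumptions, the fusion step can be sketched as a weighted least-squares problem: find the $X$ whose predicted partial vectors $Y_1(X), Y_2(X)$ best match the two estimates in the $\Sigma_i^{-1}$ metric. This is a minimal sketch, not the book's estimator; `scipy.optimize.least_squares` and all numerical values (inter-sensor distance $d$, covariances, starting point) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def Y_of_X(X, d):
    """Partial vectors Y1(X), Y2(X) from [6.50]-[6.51]."""
    cpa1, v, theta, t_cpa1 = X
    a1 = cpa1 / v
    a2 = a1 - (d / v) * np.sin(theta)
    t2 = t_cpa1 + (d / v) * np.cos(theta)
    return np.array([a1, theta, t_cpa1]), np.array([a2, theta, t2])

def fuse(y1, S1, y2, S2, d, x0):
    """Gaussian ML fusion: minimize sum_i (yi - Yi(X))' Si^-1 (yi - Yi(X)).
    Whitening with Li, where Li Li' = Si^-1, turns this into ordinary
    least squares on the stacked residual."""
    L1 = np.linalg.cholesky(np.linalg.inv(S1))
    L2 = np.linalg.cholesky(np.linalg.inv(S2))
    def resid(X):
        Y1, Y2 = Y_of_X(X, d)
        return np.concatenate([L1.T @ (y1 - Y1), L2.T @ (y2 - Y2)])
    lo = [0.0, 0.1, -np.pi, -np.inf]              # keep v > 0
    hi = [np.inf, np.inf, np.pi, np.inf]
    return least_squares(resid, x0, bounds=(lo, hi)).x

# Illustrative run: simulate the two sensors' partial estimates, then fuse.
rng = np.random.default_rng(2)
d = 1500.0                                        # inter-sensor distance (m)
X_true = np.array([2000.0, 10.0, 0.3, 300.0])     # (cpa1, v, theta, t_cpa1)
S1 = np.diag([5.0, 1e-4, 4.0])                    # cov of (alpha, theta, t_cpa)
S2 = np.diag([5.0, 1e-4, 4.0])
y1 = rng.multivariate_normal(Y_of_X(X_true, d)[0], S1)
y2 = rng.multivariate_normal(Y_of_X(X_true, d)[1], S2)
X_hat = fuse(y1, S1, y2, S2, d, x0=np.array([1000.0, 5.0, 0.0, 200.0]))
```

Note that six scalar estimates constrain four unknowns, so the full vector $X$ is recoverable here even though neither sensor alone can separate cpa from $v$.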