
the system's state to infer the instrumental variables by an iterative algorithm. The pseudo-measurements $z_k$ are defined by:

$$z_k = r_{o,x}(k)\cos\beta_k - r_{o,y}(k)\sin\beta_k. \qquad [6.17]$$

This equation can also be written in matrix form:

$$z_k = A^*(k)\,X(k) + \eta_k,$$

where $(r_{o,x}, r_{o,y})$ represents the observer's co-ordinates and:

$$A^*(k) = \left(\cos\beta_k,\; -\sin\beta_k,\; 0,\; 0\right), \qquad \eta_k = r_k \sin w_k \simeq r_k w_k. \qquad [6.18]$$
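To make [6.17] and [6.18] concrete, the pseudo-measurements and the regression rows can be assembled directly from the bearing sequence. The sketch below is illustrative only: the observer trajectory, target position, and noise level are invented for the example, and since the target is held fixed, only the position part of the state $X$ is estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scenario: all values below are assumptions, not from the text.
k = np.arange(50)
r_o = np.stack([10.0 * np.sin(0.1 * k),   # observer x-coordinate r_o,x(k)
                50.0 * k], axis=1)        # observer y-coordinate r_o,y(k)
target = np.array([2000.0, 3000.0])       # fixed target position (x, y)

# True bearings (measured from the y-axis) plus noise w_k
d = target - r_o
beta = np.arctan2(d[:, 0], d[:, 1]) + rng.normal(0.0, 2e-3, size=k.size)

# Pseudo-measurements z_k = r_o,x(k) cos(beta_k) - r_o,y(k) sin(beta_k)   [6.17]
z = r_o[:, 0] * np.cos(beta) - r_o[:, 1] * np.sin(beta)

# Regression rows: the position part of A*(k) = (cos beta_k, -sin beta_k, 0, 0)
A = np.column_stack([np.cos(beta), -np.sin(beta)])

# Explicit least-squares solution of the linear regression.  The columns of A
# are built from the noisy bearings, hence correlated with the additive noise,
# which is the source of the bias discussed below.
x_ls, *_ = np.linalg.lstsq(A, z, rcond=None)
print(x_ls)  # estimate of the target position (x, y)
```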

We are then led to consider a linear regression problem for which there is an explicit solution. However, the solution obtained this way is usually strongly biased because of the correlation between the columns of the regression matrix and the additive noise. The IVM consists of replacing the usual optimality equation for minimizing the quadratic norm of the error with the following iterative expression:

$$X_{p+1} = \left(\tilde{A}_p^T R^{-2} A_p\right)^{-1} \tilde{A}_p^T R^{-2} Z_p. \qquad [6.19]$$
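A minimal sketch of the iteration [6.19], taking $R = I$ and the same illustrative fixed-target scenario as above (all values and the `predicted_bearings` helper are assumptions): the instrumental matrix $\tilde{A}_p$ is rebuilt at each pass from the bearings predicted at the current estimate $X_p$, so the instruments decorrelate from the measurement noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative scenario (all values are assumptions); R is taken as identity.
k = np.arange(50)
r_o = np.stack([10.0 * np.sin(0.1 * k), 50.0 * k], axis=1)  # observer path
target = np.array([2000.0, 3000.0])                         # fixed target

d = target - r_o
beta = np.arctan2(d[:, 0], d[:, 1]) + rng.normal(0.0, 5e-3, size=k.size)

z = r_o[:, 0] * np.cos(beta) - r_o[:, 1] * np.sin(beta)  # pseudo-measurements
A = np.column_stack([np.cos(beta), -np.sin(beta)])       # noisy regression matrix

def predicted_bearings(x):
    """Bearings the observer would see if the target were at position x."""
    d = x - r_o
    return np.arctan2(d[:, 0], d[:, 1])

# Start from the biased least-squares solution, then iterate [6.19] with R = I:
#   X_{p+1} = (A~_p^T A_p)^{-1} A~_p^T Z_p,
# the instruments A~_p being rebuilt from bearings predicted at X_p.
x_est, *_ = np.linalg.lstsq(A, z, rcond=None)
for _ in range(10):
    b_hat = predicted_bearings(x_est)
    A_iv = np.column_stack([np.cos(b_hat), -np.sin(b_hat)])
    x_est = np.linalg.solve(A_iv.T @ A, A_iv.T @ z)
print(x_est)  # instrumental-variable estimate of the target position
```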

This is an even simpler method for implementing the Gauss-Newton algorithm. The convergence of these two methods has been debated at length; there are, however, methods that lead to results with a certain degree of generality. More precisely, if we consider the quadratic functional of $X$:

$$L(X) = \|X - X^*\|^2, \qquad [6.20]$$

Iltis and Anderson [ILT 96] show that this is a Lyapunov functional [BAR 85] for the continuous differential equation:

$$\frac{d}{dt}X = G(X), \qquad [6.21]$$

where $G(X)$ is the gradient vector of the likelihood in $X$. After some simple calculations, we get:

$$G(X) = H^*(X) \begin{pmatrix} \beta_1 - \hat{\beta}_1 \\ \vdots \\ \beta_p - \hat{\beta}_p \end{pmatrix}.$$
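The continuous equation [6.21] can be explored numerically with explicit Euler steps. The sketch below uses the gradient $G(X) = H^*(X)(\beta - \hat{\beta})$ for a Gaussian bearing-noise model, with $H^*(X)$ taken as the transposed Jacobian of the predicted bearings; the geometry, step size, and initial guess are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative geometry (all values are assumptions, as in the sketches above).
k = np.arange(50)
r_o = np.stack([10.0 * np.sin(0.1 * k), 50.0 * k], axis=1)
target = np.array([2000.0, 3000.0])
d0 = target - r_o
beta = np.arctan2(d0[:, 0], d0[:, 1]) + rng.normal(0.0, 1e-3, size=k.size)

def G(x):
    """Likelihood gradient H*(X)(beta - beta_hat) for Gaussian bearing noise."""
    d = x - r_o
    r2 = (d ** 2).sum(axis=1)
    b_hat = np.arctan2(d[:, 0], d[:, 1])
    H = np.column_stack([d[:, 1] / r2, -d[:, 0] / r2])  # rows: d(beta_hat)/dX
    return H.T @ (beta - b_hat)

# Explicit Euler integration of dX/dt = G(X); along this flow the Lyapunov
# functional L(X) decreases, i.e. the estimate drifts toward the maximum of
# the likelihood.
x = np.array([1500.0, 2500.0])   # rough initial guess
dt = 1e4                         # step size kept small for stability
for _ in range(2000):
    x = x + dt * G(x)
print(x)
```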
