The increments $\Delta X_i$, $\Delta Y_i$, $\Delta Z_i$ are defined as
\[
\begin{aligned}
X_{i,1} &= X_{i,0} + \Delta X_i,\\
Y_{i,1} &= Y_{i,0} + \Delta Y_i, \qquad (8.19)\\
Z_{i,1} &= Z_{i,0} + \Delta Z_i.
\end{aligned}
\]
They update the approximate receiver coordinates. So the Taylor expansion of $f(X_{i,0} + \Delta X_i,\, Y_{i,0} + \Delta Y_i,\, Z_{i,0} + \Delta Z_i)$ is
\[
f(X_{i,1}, Y_{i,1}, Z_{i,1})
= f(X_{i,0}, Y_{i,0}, Z_{i,0})
+ \frac{\partial f(X_{i,0}, Y_{i,0}, Z_{i,0})}{\partial X_{i,0}}\,\Delta X_i
+ \frac{\partial f(X_{i,0}, Y_{i,0}, Z_{i,0})}{\partial Y_{i,0}}\,\Delta Y_i
+ \frac{\partial f(X_{i,0}, Y_{i,0}, Z_{i,0})}{\partial Z_{i,0}}\,\Delta Z_i.
\qquad (8.20)
\]
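As a numerical sanity check (the variable names here are my own, not from the text), the first-order expansion (8.20) can be compared against the exact range for a small coordinate increment:

```python
import math

# Fixed satellite position (X^k, Y^k, Z^k), metres; values are illustrative.
SAT = (15_600_000.0, 7_540_000.0, 20_140_000.0)

def f(X, Y, Z):
    """Geometric range from the satellite to the receiver: f(X, Y, Z)."""
    return math.sqrt((SAT[0] - X) ** 2 + (SAT[1] - Y) ** 2 + (SAT[2] - Z) ** 2)

# Approximate receiver position and a small increment (metres).
X0, Y0, Z0 = 6_300_000.0, 100_000.0, 200_000.0
dX, dY, dZ = 10.0, -5.0, 8.0

rho0 = f(X0, Y0, Z0)
# First-order Taylor value using the partials -(X^k - X0)/rho0, etc.
first_order = (rho0
               - (SAT[0] - X0) / rho0 * dX
               - (SAT[1] - Y0) / rho0 * dY
               - (SAT[2] - Z0) / rho0 * dZ)
exact = f(X0 + dX, Y0 + dY, Z0 + dZ)
# For metre-level increments the two agree to well below a millimetre;
# the neglected second-order term is of size |d|^2 / (2 rho0).
```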
Equation (8.20) includes only first-order terms; hence it determines only an approximate position. The partial derivatives in Equation (8.20) are
\[
\begin{aligned}
\frac{\partial f(X_{i,0}, Y_{i,0}, Z_{i,0})}{\partial X_{i,0}} &= -\frac{X^k - X_{i,0}}{\rho_{i,0}^k},\\[4pt]
\frac{\partial f(X_{i,0}, Y_{i,0}, Z_{i,0})}{\partial Y_{i,0}} &= -\frac{Y^k - Y_{i,0}}{\rho_{i,0}^k},\\[4pt]
\frac{\partial f(X_{i,0}, Y_{i,0}, Z_{i,0})}{\partial Z_{i,0}} &= -\frac{Z^k - Z_{i,0}}{\rho_{i,0}^k}.
\end{aligned}
\]
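To make the geometry concrete, here is a minimal sketch (the function name is my own, not from the text) that computes $\rho_{i,0}^k$ and the three partial derivatives; each partial is minus one component of the unit line-of-sight vector:

```python
import math

def range_partials(sat, rcv0):
    """Range and partial derivatives at the approximate receiver position.

    sat  = (X^k, Y^k, Z^k): satellite coordinates
    rcv0 = (X_i0, Y_i0, Z_i0): approximate receiver coordinates
    Returns (rho0, dX, dY, dZ), where each partial is
    -(coordinate difference) / rho0, as in the text.
    """
    d = [s - r for s, r in zip(sat, rcv0)]
    rho0 = math.sqrt(sum(c * c for c in d))  # computed range
    return rho0, -d[0] / rho0, -d[1] / rho0, -d[2] / rho0

# Satellite along the X axis, receiver on the X axis below it:
rho0, px, py, pz = range_partials((20_000_000.0, 0.0, 0.0),
                                  (6_371_000.0, 0.0, 0.0))
# rho0 is 13_629_000 m and the X partial is -1 (the line of sight
# points purely along +X); the Y and Z partials vanish.
```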
Let $\rho_{i,0}^k$ be the range computed from the approximate receiver position; the first-order linearized observation equation becomes
\[
P_i^k = \rho_{i,0}^k
- \frac{X^k - X_{i,0}}{\rho_{i,0}^k}\,\Delta X_i
- \frac{Y^k - Y_{i,0}}{\rho_{i,0}^k}\,\Delta Y_i
- \frac{Z^k - Z_{i,0}}{\rho_{i,0}^k}\,\Delta Z_i
+ c\,(dt_i - dt^k) + T_i^k + I_i^k + e_i^k,
\qquad (8.21)
\]
where we explicitly have
\[
\rho_{i,0}^k = \sqrt{(X^k - X_{i,0})^2 + (Y^k - Y_{i,0})^2 + (Z^k - Z_{i,0})^2}.
\qquad (8.22)
\]
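Equations (8.19)–(8.22) suggest an iterative position fix: linearize around the current approximate position, solve the resulting small system for the increments, apply (8.19), and repeat. The sketch below (helper names are my own) ignores the clock, troposphere, ionosphere, and noise terms of (8.21), so each pseudorange equals the geometric range, and solves the normal equations with a tiny Gaussian-elimination routine:

```python
import math

def geom_range(sat, rcv):
    """Geometric distance between a satellite and a receiver position."""
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(sat, rcv)))

def gauss_solve(M, v):
    """Solve a small dense linear system by Gaussian elimination."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # partial pivot
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            fac = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= fac * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def solve_position(sats, pseudoranges, rcv0, iterations=5):
    """Refine an approximate receiver position by repeated linearization.

    Each pass builds one row of partials per satellite (Equation 8.20),
    matches the residual P - rho0 with the first-order terms, solves the
    normal equations for (Delta X, Delta Y, Delta Z), and updates the
    position as in Equation (8.19).
    """
    x = list(rcv0)
    for _ in range(iterations):
        A, b = [], []
        for sat, P in zip(sats, pseudoranges):
            rho0 = geom_range(sat, x)
            A.append([-(sat[j] - x[j]) / rho0 for j in range(3)])
            b.append(P - rho0)
        N = [[sum(row[p] * row[q] for row in A) for q in range(3)] for p in range(3)]
        c = [sum(A[i][p] * b[i] for i in range(len(A))) for p in range(3)]
        dx = gauss_solve(N, c)
        x = [xi + di for xi, di in zip(x, dx)]  # Equation (8.19)
    return x
```

With exact (noise-free) ranges from four well-spread satellites, a guess in error by some 100 km converges to the true position in a few iterations, since the zero-residual Gauss–Newton step is quadratically convergent.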
8.5.3 Using the Least-Squares Method

A least-squares problem is given as a system $Ax = b$ with no solution. $A$ has $m$ rows and $n$ columns, with $m > n$; there are more observations $b_1, \dots, b_m$ than free parameters $x_1, \dots, x_n$. The best choice, we will call it $\hat{x}$, is the one that minimizes the length of the error vector $\hat{e} = b - A\hat{x}$. If we measure this length in