$$E(c\tilde{x}) = c\,E(\tilde{x}) \qquad (4.29)$$
The expected value (mean) of a constant equals the constant. Because the mean is a
constant, it follows that
$$E\!\left[E(\tilde{x})\right] = \mu_x \qquad (4.30)$$
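As a quick numerical illustration of (4.29) (a sketch, not part of the source text; the constant and the sampling distribution are arbitrary choices), the sample mean of a scaled random variable equals the scaled sample mean:

```python
# Sketch: E(c*x) = c*E(x), checked with sample means.
# The constant c and the normal distribution are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.5, size=100_000)  # samples of a random variable
c = 2.5

print(np.mean(c * x))  # approx. c * mu_x = 7.5
print(c * np.mean(x))  # same value up to floating-point rounding
```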
Relations (4.28) and (4.29) also hold for multivariate density functions, as can be seen from (4.18). Let $\tilde{y} = \tilde{x}_1 + \tilde{x}_2$ be a linear function of random variables, then
$$
\begin{aligned}
E(\tilde{x}_1 + \tilde{x}_2)
&= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} (x_1 + x_2)\,f(x_1, x_2)\,dx_1\,dx_2 \\
&= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} x_1\,f(x_1, x_2)\,dx_1\,dx_2
 + \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} x_2\,f(x_1, x_2)\,dx_1\,dx_2 \\
&= E(\tilde{x}_1) + E(\tilde{x}_2)
\end{aligned}
\qquad (4.31)
$$
Thus, the expected value of the sum of two random variables equals the sum of the individual expected values.
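The double integrals in (4.31) can also be checked numerically. The sketch below is illustrative only: the bivariate normal density, its means, and the correlation are assumed values, and the infinite limits are truncated where the density is numerically negligible.

```python
# Sketch: evaluate E(x1 + x2) as the double integral in (4.31) and
# compare with E(x1) + E(x2). All numeric values are illustrative.
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import multivariate_normal

mu1, mu2, rho = 1.0, -2.0, 0.6
f = multivariate_normal([mu1, mu2], [[1.0, rho], [rho, 1.0]]).pdf

# dblquad passes the inner variable (x2) first; the infinite limits are
# truncated at +/- 8 standard deviations, where the density is negligible.
expectation, _ = dblquad(
    lambda x2, x1: (x1 + x2) * f([x1, x2]),
    mu1 - 8.0, mu1 + 8.0,                        # outer (x1) limits
    lambda x1: mu2 - 8.0, lambda x1: mu2 + 8.0,  # inner (x2) limits
)
print(expectation)  # approx. -1.0
print(mu1 + mu2)    # E(x1) + E(x2) = -1.0
```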
By combining (4.28) and (4.31), we can compute the expected value of a general linear function of random variables. Thus, if the elements of the $n \times u$ matrix $A$ and the $n \times 1$ vector $a_0$ are constants and

$$y = a_0 + Ax \qquad (4.32)$$
then the expected value is

$$E(y) = a_0 + A\,E(x) \qquad (4.33)$$

This is the law for propagating the mean.
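As a Monte Carlo sketch of (4.33) (the matrix $A$, vector $a_0$, and mean of $x$ below are arbitrary values, not taken from the text), the sample mean of $y$ approaches $a_0 + A\,E(x)$:

```python
# Sketch: law of propagating the mean, E(y) = a0 + A E(x).
# The dimensions (n = 3, u = 2) and all numeric values are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 2.0],
              [0.5, -1.0],
              [3.0, 0.0]])           # n x u matrix of constants
a0 = np.array([10.0, 20.0, 30.0])    # n x 1 vector of constants
mu_x = np.array([4.0, -1.0])         # E(x)

x = rng.normal(loc=mu_x, scale=1.0, size=(1_000_000, 2))  # samples of x
y = a0 + x @ A.T                     # y = a0 + A x, one sample per row

print(y.mean(axis=0))                # sample mean of y
print(a0 + A @ mu_x)                 # a0 + A E(x); agrees closely
```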
The law of variance-covariance propagation is as follows:
$$
\begin{aligned}
\Sigma_y &= E\!\left[(y - \mu_y)(y - \mu_y)^T\right] \\
&= E\!\left[(y - E(y))(y - E(y))^T\right] \\
&= E\!\left[(y - a_0 - A\,E(x))(y - a_0 - A\,E(x))^T\right] \\
&= E\!\left[(Ax - A\,E(x))(Ax - A\,E(x))^T\right] \\
&= A\,E\!\left[(x - E(x))(x - E(x))^T\right]A^T \\
&= A\,\Sigma_x A^T
\end{aligned}
\qquad (4.34)
$$
The first line in Expression (4.34) is the general expression for the variance-covariance matrix of the random variable $y$ according to definition (4.26); $\mu_y$ is the expected value of $y$.
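The propagation law (4.34) is easy to verify numerically. In the sketch below (all values illustrative), the analytically propagated matrix $A\,\Sigma_x A^T$ is cross-checked against the empirical covariance of simulated samples of $y$:

```python
# Sketch: variance-covariance propagation, Sigma_y = A Sigma_x A^T.
# A, a0, and Sigma_x are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[1.0, 2.0],
              [0.5, -1.0]])
a0 = np.array([5.0, -7.0])           # additive constant; does not affect covariance
Sigma_x = np.array([[2.0, 0.3],
                    [0.3, 1.0]])     # variance-covariance matrix of x

Sigma_y = A @ Sigma_x @ A.T          # analytic propagation per (4.34)

# Monte Carlo cross-check with y = a0 + A x
x = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma_x, size=1_000_000)
y = a0 + x @ A.T
print(Sigma_y)
print(np.cov(y, rowvar=False))       # matches Sigma_y to a few decimals
```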