Evidently the second term on the right-hand side of Eq. (F.6) is Hermitian and positive semidefinite (all eigenvalues $\geq 0$). So the error correlation matrices satisfy$^1$

$E[ee^{\dagger}] \;\geq\; E[\widehat{e}\,\widehat{e}^{\dagger}].$   (F.7)
Thus the error correlation matrix for an arbitrary linear estimate is “at
least as large as” the error correlation matrix of the optimum estimate.
This is something we may not have guessed from the property $E[e^{\dagger}e] \geq E[\widehat{e}^{\dagger}\widehat{e}]$ of the estimator (because $u^{\dagger}u \geq v^{\dagger}v$ does not in general imply $uu^{\dagger} \geq vv^{\dagger}$; for example, try $u = [1\ 0]^T$ and $v = [0\ 1]^T$).
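As a quick numerical check of this counterexample (a minimal sketch in Python/numpy, not part of the original text), the scalar ordering $u^{\dagger}u \geq v^{\dagger}v$ holds with equality, while $uu^{\dagger} - vv^{\dagger}$ has a negative eigenvalue and so cannot be positive semidefinite:

```python
import numpy as np

# Counterexample from the text: u'u >= v'v does not imply uu' >= vv'.
u = np.array([[1.0], [0.0]])  # u = [1 0]^T
v = np.array([[0.0], [1.0]])  # v = [0 1]^T

# Scalar ordering holds: u'u = v'v = 1.
print((u.T @ u).item(), (v.T @ v).item())

# But uu' - vv' = diag(1, -1) has a negative eigenvalue, so uu' >= vv' fails.
print(np.linalg.eigvalsh(u @ u.T - v @ v.T))  # [-1.  1.]
```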
3. Orthogonality. From Eq. (F.4) we know that for optimality of the estimate $\widehat{x}$, every component of the error $\widehat{e}$ must be orthogonal to every component of the observation $y$, that is,

$E[(\widehat{e})_i\, y_k^{*}] = 0,$ for all $i$ and $k$.

Orthogonality therefore implies that

$E[\widehat{e}\,\widehat{x}^{\dagger}] = E[\widehat{e}\,y^{\dagger}A^{\dagger}] = \mathbf{0}.$

Thus the error is orthogonal to the estimate itself. Taking the trace of this, it then follows that

$E[\widehat{e}^{\dagger}\widehat{x}] = 0.$
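These orthogonality relations are easy to verify numerically. The sketch below uses an assumed setup not taken from the text (zero-mean data from a random mixing model), with the optimal linear estimator $A = R_{xy}R_{yy}^{-1}$ obtained from the orthogonality condition itself; since $A$ is built from the sample correlations, orthogonality holds to floating-point accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup (not from the text): zero-mean x (2-dim) and y (3-dim),
# made jointly correlated through a random mixing matrix.
n = 100_000
z = rng.standard_normal((5, 5)) @ rng.standard_normal((5, n))
x, y = z[:2], z[2:]

# Optimal linear estimator from the orthogonality condition E[e y'] = 0,
# built from sample correlations: A = Rxy Ryy^{-1}.
A = (x @ y.T) @ np.linalg.inv(y @ y.T)
x_hat = A @ y
e_hat = x - x_hat

# The error is orthogonal to the observation, hence also to the estimate.
print(np.max(np.abs(e_hat @ y.T / n)))      # E[e_hat y']     ~ 0
print(np.max(np.abs(e_hat @ x_hat.T / n)))  # E[e_hat x_hat'] ~ 0
```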
4. Right-triangle analog. From $x = \widehat{x} + \widehat{e}$ we can verify using orthogonality that the correlation matrices for the optimal estimator satisfy

$E[xx^{\dagger}] = E[\widehat{x}\,\widehat{x}^{\dagger}] + E[\widehat{e}\,\widehat{e}^{\dagger}].$   (F.8)

Taking the trace on both sides, this implies in particular that

$E[x^{\dagger}x] = E[\widehat{x}^{\dagger}\widehat{x}] + E[\widehat{e}^{\dagger}\widehat{e}].$   (F.9)
Thus the mean square value of the estimate and that of the error add up to the mean square value of the original variable $x$. The same is true of the corresponding correlation matrices. It is as if $x$ is the hypotenuse of a right triangle whose base is $\widehat{x}$ and height is $\widehat{e}$. See Fig. F.1.
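The right-triangle identity can be checked in the same assumed setup as the previous sketch. Because the sample-based estimator enforces orthogonality exactly, the sample correlation matrices split exactly as in Eq. (F.8), and their traces as in Eq. (F.9):

```python
import numpy as np

rng = np.random.default_rng(1)

# Same assumed setup as the previous sketch: jointly correlated x and y.
n = 100_000
z = rng.standard_normal((5, 5)) @ rng.standard_normal((5, n))
x, y = z[:2], z[2:]

A = (x @ y.T) @ np.linalg.inv(y @ y.T)  # optimal linear estimator
x_hat, e_hat = A @ y, x - A @ y

Rxx = x @ x.T / n
Rxh = x_hat @ x_hat.T / n
Reh = e_hat @ e_hat.T / n

# Eq. (F.8): E[x x'] = E[x_hat x_hat'] + E[e_hat e_hat'].
print(np.max(np.abs(Rxx - (Rxh + Reh))))  # ~ 0
# Eq. (F.9): the mean square values (traces) add up the same way.
print(Rxx.trace(), Rxh.trace() + Reh.trace())
```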
$^1$ The notation $A \geq B$ for two Hermitian matrices $A$ and $B$ simply means that $A - B$ is positive semidefinite.