Digital Signal Processing Reference
where $\mathbf{e} = \mathbf{d} - C\hat{\mathbf{w}}$ is the error vector at the optimal $\hat{\mathbf{w}}$. In a straightforward manner we can obtain:

$$C^T M C\,\hat{\mathbf{w}} = C^T M \mathbf{d},\qquad (5.32)$$
which is the normal equation for this problem. At this point, the same considerations made for the LS problem can be made in this case. To keep things simple we will consider that $M^{1/2}C$ is a full column rank matrix.^7 In that case $C^T M C$ is invertible and the optimal $\hat{\mathbf{w}}$ can be obtained as:

$$\hat{\mathbf{w}} = \left(C^T M C\right)^{-1} C^T M \mathbf{d}.\qquad (5.33)$$
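As a quick numerical sketch (not from the text), the weighted LS estimate of (5.33) can be computed with NumPy by solving the normal equations (5.32) directly. All dimensions and data below are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: n observations, p parameters.
n, p = 8, 3
C = rng.standard_normal((n, p))          # fixed, known data matrix
d = rng.standard_normal(n)               # observed vector
M = np.diag(rng.uniform(0.5, 2.0, n))    # symmetric positive definite weighting

# Normal equations (5.32): C^T M C w = C^T M d, solved as in (5.33).
# np.linalg.solve is preferred over forming the explicit inverse.
w_hat = np.linalg.solve(C.T @ M @ C, C.T @ M @ d)

# Sanity check: the residual e = d - C w_hat is M-orthogonal
# to the columns of C, i.e. C^T M e = 0.
e = d - C @ w_hat
assert np.allclose(C.T @ M @ e, 0.0)
```

Since $M^{1/2}C$ has full column rank here (random $C$, positive diagonal $M$), the solve succeeds and the solution is unique.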
The unique optimal $\mathbf{d}_{\mathrm{opt}}$ can be written as:

$$\mathbf{d}_{\mathrm{opt}} = C\left(C^T M C\right)^{-1} C^T M \mathbf{d},\qquad (5.34)$$
which means that the projection matrix $P_M[S(C)]$ can be put as:

$$P_M[S(C)] = C\left(C^T M C\right)^{-1} C^T M.\qquad (5.35)$$
Except for the symmetry condition, $P_M[S(C)]$ has the properties mentioned previously for $P[S(C)]$.
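These properties of (5.35) are easy to verify numerically. The following sketch (with hypothetical random data) checks that $P_M[S(C)]$ is idempotent and fixes $S(C)$, while ordinary symmetry fails for a generic $M \neq I$; what does hold is the $M$-weighted symmetry $MP = P^T M$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 2
C = rng.standard_normal((n, p))
M = np.diag(rng.uniform(0.5, 2.0, n))    # symmetric positive definite weight

# Oblique projection matrix of (5.35): P = C (C^T M C)^{-1} C^T M
P = C @ np.linalg.solve(C.T @ M @ C, C.T @ M)

# Idempotent (it is a projector) and leaves the subspace S(C) fixed:
assert np.allclose(P @ P, P)
assert np.allclose(P @ C, C)

# Self-adjoint under the M-weighted inner product: M P = P^T M.
assert np.allclose(M @ P, P.T @ M)

# Plain symmetry P = P^T generally fails when M != I:
print("P symmetric:", np.allclose(P, P.T))
```

Setting `M = np.eye(n)` recovers the symmetric orthogonal projector $P[S(C)]$ of the unweighted LS case.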
5.4 Some Statistical Properties of the LS Solution
The appealing nature of the LS formulation (5.4) is reinforced by some interesting and useful properties. In this section we will mention some of them without proof.^8 The interested reader can see [7] and [9] for a more detailed analysis. Consider (5.2), rewritten below for ease of reference:

$$\mathbf{d} = C\mathbf{w}_T + \mathbf{v}.\qquad (5.36)$$
Let us consider $\mathbf{v} = [v(0)\; v(1)\; \ldots\; v(n-1)]^T$, where $v(i)$ is a sequence of zero-mean and i.i.d. random variables with $E\left[v^2(i)\right] = \sigma_v^2$. Although the matrix $C$ could be composed of observations of a random process, it will be assumed fixed and perfectly known. That is, the only randomness in the model is due to $v(i)$. The LS solution $\hat{\mathbf{w}}$ can be thought of as an estimate of $\mathbf{w}_T$. The following holds:
1. The LS solution $\hat{\mathbf{w}}$ is an unbiased estimator. That is:
^7 Notice that as $M$ is a symmetric positive definite matrix, its square root is well defined [3].
^8 We will restrict ourselves to the regular LS. Similar properties for the more general WLS also exist.