is minimized. Defining
\[
r = [\, r(0) \;\; r(1) \;\; \ldots \;\; r(K - D - 1) \,],
\]
we can therefore say that y_est is a minimum distance estimate of y based on r. When the noise is AWGN, this estimate is therefore an ML estimate (Sec. 5.5.4). Thus
\[
f(r \mid y_{\mathrm{est}}) \;\ge\; f(r \mid y) \tag{5.79}
\]
for any other feasible noise-free output vector y.
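As a numerical sketch of why minimum distance coincides with ML under AWGN (Sec. 5.5.4): the Gaussian likelihood f(r | y) is a strictly decreasing function of the distance ‖r − y‖, so whichever candidate is closer to r also has the larger likelihood. The candidate vectors and noise level below are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5  # assumed AWGN standard deviation (illustrative)

def likelihood(r, y, sigma):
    """Gaussian pdf f(r | y) for AWGN with variance sigma^2 per sample."""
    d2 = float(np.sum((r - y) ** 2))
    n = r.size
    return np.exp(-d2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2) ** (n / 2.0)

# Two candidate noise-free outputs; r is y1 plus noise.
y1 = np.array([1.0, -1.0, 1.0])
y2 = np.array([-1.0, 1.0, 1.0])
r = y1 + sigma * rng.standard_normal(3)

# Minimum distance <=> maximum likelihood, which is the content of (5.79).
d_closer_is_y1 = np.linalg.norm(r - y1) < np.linalg.norm(r - y2)
l_larger_is_y1 = likelihood(r, y1, sigma) > likelihood(r, y2, sigma)
assert d_closer_is_y1 == l_larger_is_y1
```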
Observe now that the vector y is related to the transmitted symbol vector s as follows:
\[
\underbrace{\begin{bmatrix} y(0) \\ y(1) \\ \vdots \\ y(N) \end{bmatrix}}_{y}
=
\underbrace{\begin{bmatrix}
h(0) & 0 & \ldots & 0 \\
h(1) & h(0) & \ldots & 0 \\
\vdots & \vdots & & \vdots \\
h(N) & h(N-1) & \ldots & h(0)
\end{bmatrix}}_{A}
\underbrace{\begin{bmatrix} s(0) \\ s(1) \\ \vdots \\ s(N) \end{bmatrix}}_{s},
\tag{5.80}
\]
where N = K − D − 1. (In fact, Eq. (5.80) is true for any N ≥ K − D − 1.) Since the samples s(n) belong to a constellation with M possible discrete values, the vector s can take M^{N+1} discrete values. So the output vector y can take at most M^{N+1} discrete values. In fact it takes exactly M^{N+1} distinct values assuming that the matrix above is nonsingular, that is, h(0) ≠ 0.⁴ Thus, even though each sample y(n) can in principle have many more than M possible values (because it is a linear combination of the samples s(n − k)), the vector y comes from a set with precisely M^{N+1} discrete values. A number of points should now be noted.
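The Toeplitz relation y = A s in Eq. (5.80) can be sketched numerically; the channel taps and BPSK symbols below are hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical channel taps (with h(0) != 0) and BPSK symbols.
h = np.array([1.0, 0.5, 0.25])
s = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
N = len(s) - 1

# Lower triangular Toeplitz matrix A of Eq. (5.80): A[i, j] = h(i - j).
A = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for j in range(N + 1):
        if 0 <= i - j < len(h):
            A[i, j] = h[i - j]

y = A @ s
# y = A s reproduces the first N+1 samples of the linear convolution h * s.
assert np.allclose(y, np.convolve(h, s)[: N + 1])
```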
1. ML property. Since each s from the discrete set maps to a unique y and vice versa, we see that (5.79) also implies
\[
f_1(r \mid s_{\mathrm{est}}) \;\ge\; f_1(r \mid s), \tag{5.81}
\]
where f_1(· | ·) represents the conditional pdf of r given s. Thus, the fact that y_est is an ML estimate of y implies that s_est is an ML estimate of s (based on the received vector r).
2. MAP property. Next, assume that the symbols s(n) are independent and identically distributed, with identical probabilities for all symbols in the constellation. Then the M^{N+1} discrete values of s have identical probabilities. Thus the ML property of the estimate s_est also implies that it is an MAP estimate (Sec. 5.5.1).
3.
Error-event probability.
From the discussion of Sec. 5.5.3 it therefore follows
that this estimated vector
s
est
has the minimum error probability property.
We simply say that the error-event probability has been minimized [Forney,
1972]. That is, the probability of error in the estimation of the vector,
viewed as one entity, is minimized.
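Points 1–3 can be illustrated with a brute-force sketch: enumerate all M^{N+1} candidate symbol vectors, form y = A s for each, and keep the minimum-distance candidate, which is then the ML estimate and, for equiprobable symbols, the MAP estimate. The channel taps, constellation (BPSK, M = 2), and noise level below are illustrative assumptions.

```python
import numpy as np
from itertools import product

# Hypothetical channel with h(0) != 0, and N = 3.
h = np.array([1.0, 0.5, 0.25])
N = 3
A = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for j in range(N + 1):
        if 0 <= i - j < len(h):
            A[i, j] = h[i - j]

rng = np.random.default_rng(2)
s_true = np.array([1.0, -1.0, -1.0, 1.0])           # BPSK, M = 2
r = A @ s_true + 0.1 * rng.standard_normal(N + 1)   # mild AWGN

# Exhaustive search over all M**(N+1) = 16 candidate vectors.
cands = [np.array(c) for c in product([-1.0, 1.0], repeat=N + 1)]
s_est = min(cands, key=lambda s: np.linalg.norm(r - A @ s))
# At this low noise level the minimum-distance estimate recovers s_true.
assert np.array_equal(s_est, s_true)
```

The enumeration costs M^{N+1} distance evaluations; the same minimization can be organized far more efficiently (this is the point of the Viterbi-style sequence estimation in [Forney, 1972]), but the resulting estimate is identical.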
⁴ If s_1 and s_2 are two distinct values of s, then y_1 − y_2 = A(s_1 − s_2). If s_1 − s_2 ≠ 0 and A is nonsingular, it follows that y_1 − y_2 ≠ 0. So y takes M^{N+1} distinct values, like s.
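The distinctness claim of footnote 4 can be checked numerically on a small example: with h(0) ≠ 0, A is lower triangular with h(0) on the diagonal, hence nonsingular, so distinct s map to distinct y = A s. The taps below are hypothetical (BPSK gives M = 2, and N = 2).

```python
import numpy as np
from itertools import product

h = np.array([1.0, -0.7, 0.3])   # hypothetical taps with h(0) != 0
N = 2
A = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for j in range(N + 1):
        if 0 <= i - j < len(h):
            A[i, j] = h[i - j]

# Map every candidate s to y = A s and count distinct outputs.
outputs = {tuple(A @ np.array(c)) for c in product([-1.0, 1.0], repeat=N + 1)}
assert len(outputs) == 2 ** (N + 1)   # exactly M**(N+1) = 8 distinct values of y
```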