\[
= 1 - \sum_{k=0}^{M-1} \int_{R_k} p(s_k \mid r)\, f(r)\, dr
= 1 - \int_{R} p(s_k \mid r)\, f(r)\, dr,
\]
where $R$ is the union of the disjoint regions $R_k$; in the last integral, $k$ is understood to be the index for which $r \in R_k$. The third equality is obtained by using Bayes' rule, that is, $f(r \mid s_k)\, p(s_k) = p(s_k \mid r)\, f(r)$. So we have proved that the average error probability is
\[
P_e = 1 - \int_{R} p(s_k \mid r)\, f(r)\, dr. \tag{5.55}
\]
It is now clear that this is minimized by maximizing $p(s_k \mid r)$ for each $r$, that is, by assigning each received value $r$ to the region of the symbol with the largest a posteriori probability. So the MAP estimate also minimizes the average error probability. Since the ML estimate agrees with the MAP estimate when all the $p(s_k)$ are equal, the ML estimate also minimizes the error probability when all $s_k$ have identical probabilities.
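To see this numerically, here is a minimal Monte Carlo sketch (Python/NumPy; the binary constellation, the priors, and the noise level are illustrative choices, not values from the text) comparing the ML rule, which maximizes $f(r \mid s_k)$, with the MAP rule, which maximizes $f(r \mid s_k)\, p(s_k) \propto p(s_k \mid r)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: binary antipodal constellation with deliberately
# unequal priors, observed through an AWGN channel (cf. Section 5.5.4).
symbols = np.array([-1.0, 1.0])   # s_0, s_1
priors  = np.array([0.8, 0.2])    # p(s_0), p(s_1)
sigma   = 1.0                     # noise standard deviation

N = 200_000
k_true = rng.choice(2, size=N, p=priors)
r = symbols[k_true] + sigma * rng.standard_normal(N)

# Gaussian likelihoods f(r | s_k); the common normalizing constant
# does not affect the argmax and is omitted.
lik = np.exp(-(r[:, None] - symbols[None, :]) ** 2 / (2 * sigma**2))

k_ml  = np.argmax(lik, axis=1)           # ML: maximize f(r | s_k)
k_map = np.argmax(lik * priors, axis=1)  # MAP: maximize f(r | s_k) p(s_k)

print("ML  error rate:", np.mean(k_ml != k_true))
print("MAP error rate:", np.mean(k_map != k_true))
```

With these unequal priors the MAP rule reports the smaller error rate; setting the priors equal makes the two rules, and their error rates, coincide, as stated above.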
5.5.4 The ML estimate in the Gaussian case
The maximum likelihood (ML) method finds an estimate $s_{\mathrm{est}}$ from the measurement $r$ such that the conditional probability $f(r \mid s_{\mathrm{est}})$ is maximized. Now assume that we have an additive white Gaussian noise or AWGN channel, that is,
\[
r(n) = s(n) + q(n), \tag{5.56}
\]
where $q(n)$ is zero-mean white Gaussian noise with variance $\sigma_q^2$. For a fixed transmitted symbol $s(n) = s_k$, we see that $r(n)$ is a Gaussian random variable with mean $s_k$. Its density function is therefore
\[
f(r \mid s_k) = \frac{1}{\sqrt{2\pi\sigma_q^2}}
\exp\!\left( -\frac{(r - s_k)^2}{2\sigma_q^2} \right), \tag{5.57}
\]
where the time argument $(n)$ has been omitted for simplicity. Since the exponential is monotone and the factor in front of it does not depend on $s_k$, maximizing this quantity is equivalent to minimizing
\[
D^2(r, s_k) = (r - s_k)^2. \tag{5.58}
\]
That is, given the received sample $r(n)$ at time $n$, the best estimate of the symbol $s(n)$ at time $n$ is that value $s_k$ in the constellation which minimizes the distance $D(r, s_k)$.
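As a concrete illustration, the following sketch (Python/NumPy; the 4-PAM constellation and the noise level are hypothetical choices) simulates the AWGN channel of Eq. (5.56) and applies the minimum-distance rule of Eq. (5.58) to each received sample:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 4-PAM constellation; any finite symbol set works here.
constellation = np.array([-3.0, -1.0, 1.0, 3.0])
sigma_q = 0.5   # noise standard deviation

# Simulate the channel of Eq. (5.56): r(n) = s(n) + q(n).
K = 10
k_true = rng.integers(0, len(constellation), size=K)
r = constellation[k_true] + sigma_q * rng.standard_normal(K)

# Minimum-distance (= ML) detection: for each r(n), pick the s_k
# minimizing D^2(r, s_k) = (r - s_k)^2, per Eq. (5.58).
k_est = np.argmin((r[:, None] - constellation[None, :]) ** 2, axis=1)

print("sent:    ", k_true)
print("detected:", k_est)
```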
Similarly, suppose we have received a sequence of $K$ samples
\[
r = [\, r(0) \;\; r(1) \;\; \cdots \;\; r(K-1) \,] \tag{5.59}
\]
and want to estimate the first K symbols
\[
s = [\, s(0) \;\; s(1) \;\; \cdots \;\; s(K-1) \,] \tag{5.60}
\]
such that the conditional pdf
\[
f(r \mid s) \tag{5.61}
\]
is maximized.
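Note that, because the noise samples $q(n)$ are white and Gaussian and hence independent, this joint conditional pdf factors into per-sample densities of the form (5.57):
\[
f(r \mid s) = \prod_{n=0}^{K-1} f\bigl(r(n) \mid s(n)\bigr)
= \prod_{n=0}^{K-1} \frac{1}{\sqrt{2\pi\sigma_q^2}}
\exp\!\left( -\frac{\bigl(r(n) - s(n)\bigr)^2}{2\sigma_q^2} \right),
\]
so maximizing $f(r \mid s)$ over the symbol sequence is equivalent to minimizing the total squared distance $\sum_{n=0}^{K-1} \bigl(r(n) - s(n)\bigr)^2$, i.e. to applying the minimum-distance rule sample by sample.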