where $g(n) = w(n) \ast h(n)$. Then, the equalizer output can be rewritten as
$$y(n) = g(n) \ast s(n) = g(0)s(n) + \sum_{i=-\infty,\ i \neq 0}^{\infty} g(i)s(n-i) = g(0)s(n) + \eta(n) \qquad (4.8)$$
where $\eta(n)$ is the so-called convolutional noise [135], which is null only if the ZF condition is attained.
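The decomposition in (4.8) can be illustrated numerically. In the sketch below, the combined response taps, the BPSK alphabet, and the random seed are illustrative assumptions, not taken from the text; an imperfect equalizer leaves small residual taps around a dominant $g(0)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical combined channel-equalizer response g(n) = w(n) * h(n);
# the dominant tap g(0) sits at index `center`.
g = np.array([0.05, -0.1, 1.0, 0.08, -0.03])
center = 2

# BPSK symbols s(n) in {-1, +1} (assumed alphabet)
s = rng.choice([-1.0, 1.0], size=1000)

# Equalizer output y(n) = g(n) * s(n), aligned so y[n] contains g(0)s(n)
y = np.convolve(g, s)[center:center + len(s)]

# Decomposition of Eq. (4.8): y(n) = g(0) s(n) + eta(n)
eta = y - g[center] * s

# |eta(n)| is bounded by the sum of the residual |g(i)|, i != 0 (0.26 here)
print(np.max(np.abs(eta)))
```

For a ZF equalizer all taps except $g(0)$ would vanish and `eta` would be identically zero, matching the statement above.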
If the pdf of $\eta(n)$ is known beforehand, the maximum likelihood (ML) estimate $\hat{s}(n)$ (vide Section 2.5) of the transmitted symbol is given by
$$\hat{s}(n)_{\mathrm{ML}} = \arg\max_{s(n)} p_y\left(y(n) \mid s(n)\right) \qquad (4.9)$$
where $p_y\left(y(n) \mid s(n)\right)$ is the conditional distribution of the equalizer output given the transmitted signal $s(n)$. It should be noted that this conditional distribution depends on the channel and the equalizer, which are unknown.
Thus, the derivation of an ML estimator depends on additional assump-
tions that provide an adequate characterization of the convolutional noise
distribution [35].
A first simplifying assumption is that the convolutional noise presents a
Gaussian distribution, which can be justified in terms of the central limit
theorem [230], if we consider that the combined response $g(n)$ is long enough. In this case, the ML estimator becomes the minimum variance estimator [135,166], given by
$$\psi\left(y(n)\right) = E\left\{s(n) \mid y(n)\right\} \qquad (4.10)$$
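For equiprobable BPSK symbols $s(n) \in \{\pm 1\}$ and Gaussian convolutional noise, the conditional mean in (4.10) admits the closed form $E\{s(n) \mid y(n)\} = \tanh\left(g(0)\,y(n)/\sigma_\eta^2\right)$. The sketch below checks this against an empirical conditional average; the values of $g(0)$, $\sigma_\eta$, and the alphabet are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
g0, sigma = 1.0, 0.4                 # assumed g(0) and noise std
s = rng.choice([-1.0, 1.0], size=200_000)
y = g0 * s + sigma * rng.standard_normal(s.size)

# Closed-form conditional mean for equiprobable BPSK under Gaussian noise:
# E{s(n) | y(n)} = tanh(g(0) * y(n) / sigma^2)
s_hat = np.tanh(g0 * y / sigma**2)

# Empirical check: average s(n) over samples whose y(n) falls near y0
y0 = 0.5
mask = np.abs(y - y0) < 0.02
print(s[mask].mean(), np.tanh(g0 * y0 / sigma**2))
```

The two printed values agree closely for large sample sizes, consistent with (4.10) being the minimum variance (conditional mean) estimator.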
We have that
$$\mu_\eta = E\left\{\eta(n)\right\} = 0 \qquad (4.11)$$
and
$$\sigma_\eta^2 = E\left\{\eta^2(n)\right\} = E\left\{s^2(n)\right\} \sum_{i \neq 0} g^2(i) \qquad (4.12)$$
Thus, we can consider that
$$p_\eta\left(\eta(n)\right) = \frac{1}{\sqrt{2\pi}\,\sigma_\eta} \exp\left(-\frac{\eta^2(n)}{2\sigma_\eta^2}\right) \qquad (4.13)$$
The output signal $y(n)$ is simply the sum of $g(0)s(n)$ and $\eta(n)$. Hence, its pdf equals the convolution between $p_\eta\left(\eta(n)\right)$ and the pdf of $g(0)s(n)$.
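For BPSK, the pdf of $g(0)s(n)$ consists of two point masses at $\pm g(0)$, so the convolution with $p_\eta$ yields a two-component Gaussian mixture. The sketch below (with assumed values of $g(0)$, $\sigma_\eta$, and the alphabet) compares this mixture density with a histogram of the simulated output:

```python
import numpy as np

rng = np.random.default_rng(3)
g0, sigma = 1.0, 0.3                       # assumed g(0) and sigma_eta
s = rng.choice([-1.0, 1.0], size=100_000)
y = g0 * s + sigma * rng.standard_normal(s.size)

def p_eta(x):
    # Gaussian convolutional-noise pdf of Eq. (4.13)
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

# pdf of y = convolution of p_eta with the two-point pdf of g(0)s(n):
# p_y(y) = 0.5 * p_eta(y - g0) + 0.5 * p_eta(y + g0)
edges = np.linspace(-2.5, 2.5, 51)
hist, edges = np.histogram(y, bins=edges, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
p_mid = 0.5 * p_eta(mids - g0) + 0.5 * p_eta(mids + g0)

# Deviation between the empirical and analytical densities
print(np.max(np.abs(hist - p_mid)))
```

The histogram converges to the mixture density as the number of symbols grows, illustrating that the output pdf is indeed the stated convolution.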
 