function in (2.26) is often hampered by the existence of local maxima. Effective initialization techniques should therefore be used in conjunction with ML estimation.
2.3.1 Training-Based Channel Estimation
Training-based channel estimation assumes the availability of the input vector $s$ (the training symbols) and its corresponding observation vector $Y$. When the noise samples are zero-mean white Gaussian, i.e., $v$ is a zero-mean Gaussian random vector with covariance $\sigma_v^2 I_{T_B N}$, the ML estimator defined in (2.26), with $\theta = H$, is given by
$$\hat{H} = \arg\min_{H} \left\| Y - T(s)\,H \right\|^2 = T^{\dagger}(s)\,Y, \qquad (2.27)$$
where $T^{\dagger}(s)$ is the Moore-Penrose pseudo-inverse of the $T(s)$ defined in (2.25). This is also the classical linear least-squares estimator, which can be implemented recursively. Among all unbiased estimators it has the minimum mean-square error, and it is efficient in the sense that it achieves the Cramér-Rao lower bound. Various adaptive implementations can be found in [42].
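As a concrete illustration, here is a minimal NumPy sketch of the estimator (2.27). It assumes that $T(s)$ is the usual Toeplitz convolution matrix built from the training symbols, which is one plausible reading of the $T(s)$ in (2.25) (not reproduced here); the function name, signal lengths, and test channel are illustrative only.

```python
import numpy as np
from scipy.linalg import toeplitz

def ls_channel_estimate(s, y, L):
    """Training-based LS/ML channel estimate, as in (2.27).

    s : known training symbols, shape (N,)
    y : corresponding received samples, shape (N,)
    L : channel memory, so the estimate has L + 1 taps
    """
    # T(s): Toeplitz convolution matrix, row n = [s[n], s[n-1], ..., s[n-L]]
    # (zeros before the start of the training block), so y ~= T @ h + v.
    T = toeplitz(s, np.r_[s[0], np.zeros(L)])
    # Moore-Penrose pseudo-inverse solution h_hat = pinv(T) @ y;
    # lstsq is the numerically preferred way to compute it.
    h_hat, *_ = np.linalg.lstsq(T, y, rcond=None)
    return h_hat

# Quick check with a known (hypothetical) 3-tap channel and BPSK training.
rng = np.random.default_rng(0)
h_true = np.array([1.0, 0.5, -0.2])
s = rng.choice([-1.0, 1.0], size=200)
y = np.convolve(s, h_true)[:len(s)] + 0.01 * rng.standard_normal(len(s))
print(ls_channel_estimate(s, y, L=2))   # close to h_true
```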
2.3.1.1 Time-Variant Channels
In the case of general time-varying channels represented by (2.8), a simple generalization of [4] (see also [36]) is to use a periodic Kronecker delta sequence as training:
$$s(n) = \sum_{j} \delta(n - jP). \qquad (2.28)$$
With (2.28) as input to model (2.8), one obtains
$$y(n) = \sum_{j} h(n;\, n - jP) + v(n), \qquad (2.29)$$
so that if $P > L$, we have, for $0 \le i \le L$,
$$y(kP + i) = h(kP + i;\, i) + v(kP + i). \qquad (2.30)$$
Therefore, one may take the estimate of $h(kP;\, i)$ as

$$\hat{h}(kP;\, i) = y(kP + i) = h(kP + i;\, i) + v(kP + i). \qquad (2.31)$$
For time samples between the probe instants $kP$ ($k$ an integer), linear interpolation may be used to obtain the channel estimates.
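As a sketch of the whole procedure (2.28)-(2.31), the following NumPy fragment assumes observations $y$ generated by (2.29) with impulse spacing $P > L$, reads off the raw tap estimates at the probe instants $kP$, and linearly interpolates each tap in between; all names are illustrative.

```python
import numpy as np

def probe_time_varying_channel(y, P, L):
    """Estimate h(n; i), 0 <= i <= L, from the response to the periodic
    impulse train (2.28), assuming P > L so that successive channel
    responses do not overlap (eq. (2.30))."""
    n_samples = len(y)
    n_probes = n_samples // P
    # Raw estimates at the probe instants: h_hat(kP; i) = y(kP + i), (2.31).
    h_at_probes = np.array([[y[k * P + i] for i in range(L + 1)]
                            for k in range(n_probes)])
    # Linear interpolation of each tap trajectory between the instants kP.
    t_probes = np.arange(n_probes) * P
    t_all = np.arange(n_samples)
    h_hat = np.column_stack([np.interp(t_all, t_probes, h_at_probes[:, i])
                             for i in range(L + 1)])
    return h_hat  # shape (n_samples, L + 1); row n estimates h(n; 0..L)
```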