Digital Signal Processing Reference
The maximization step can be solved using the forward-backward algorithm, as we verify presently. First, let us develop

$$\log \Pr(\mathbf{y}, J \mid \theta') = \log \Pr(\mathbf{y} \mid J, \theta') + \log \Pr(J \mid \theta').$$
Here we note that, given the state transition sequence $J$ and parameter vector $\theta'$, successive channel outputs $y_i$ are conditionally independent because the channel noise is white and Gaussian. The mean of $y_i$, denoted by $H'(j_i)$, is the noise-free channel output at time $i$ using channel coefficients $\mathbf{h}'$ for the given extended state configuration $j_i$ at time $i$. Thus we have:

$$\Pr(\mathbf{y} \mid J, \theta') = \frac{1}{(\sqrt{2\pi}\,\sigma')^{N}} \prod_{i=1}^{N} \exp\!\left(-\frac{[y_i - H'(j_i)]^2}{2\sigma'^2}\right).$$
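As a concrete illustration, the logarithm of this conditional likelihood can be sketched in Python. All names here (`conditional_log_likelihood`, the state-to-mean map `H`, `sigma`) are our own illustrative choices, not notation from the text:

```python
import math

def conditional_log_likelihood(y, states, H, sigma):
    """log Pr(y | J, theta') for white Gaussian channel noise.

    y      : observed channel outputs y_1, ..., y_N
    states : extended state sequence j_1, ..., j_N
    H      : mapping from extended state to noise-free output H'(j_i)
    sigma  : noise standard deviation sigma'
    """
    ll = 0.0
    for yi, ji in zip(y, states):
        # per-sample term: -[y_i - H'(j_i)]^2 / (2 sigma'^2) - log(2 pi)/2 - log sigma'
        ll += (-(yi - H[ji]) ** 2 / (2 * sigma ** 2)
               - 0.5 * math.log(2 * math.pi) - math.log(sigma))
    return ll
```

Because the noise samples are independent, the joint log-likelihood is simply the sum of per-sample Gaussian log-densities evaluated at the state-dependent means.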
We note also that $\Pr(J \mid \theta') = \Pr(J)$, since the state transition sequence $J$ depends on the channel input sequence $(d_i)$ but not on the channel coefficients. Our development for $\log \Pr(\mathbf{y}, J \mid \theta')$ thus reads as

$$\begin{aligned}
\log \Pr(\mathbf{y}, J \mid \theta')
&= \log \Pr(\mathbf{y} \mid J, \theta') + \log \Pr(J \mid \theta') \\
&= -\sum_{i} \left( \frac{[y_i - H'(j_i)]^2}{2\sigma'^2} + \frac{\log(2\pi)}{2} + \log \sigma' \right) + \log \Pr(J).
\end{aligned}$$
Inserting this development into the sum for $Q(\theta^{(m)}, \theta')$ then gives

$$\begin{aligned}
Q(\theta^{(m)}, \theta')
&= -\sum_{J} \Pr(\mathbf{y}, J \mid \theta^{(m)}) \sum_{i} \left( \frac{[y_i - H'(j_i)]^2}{2\sigma'^2} + \frac{\log(2\pi)}{2} + \log \sigma' \right) \\
&\quad + \sum_{J} \Pr(\mathbf{y}, J \mid \theta^{(m)}) \log \Pr(J) \\
&= -\sum_{i,j} \Pr(\mathbf{y}, j_i = S_j \mid \theta^{(m)}) \left( \frac{[y_i - H'(S_j)]^2}{2\sigma'^2} + \frac{\log(2\pi)}{2} + \log \sigma' \right) \\
&\quad + \sum_{J} \Pr(\mathbf{y}, J \mid \theta^{(m)}) \log \Pr(J),
\end{aligned}$$
which is seen to expose the marginal evaluations $\Pr(\mathbf{y}, j_i = S_j \mid \theta^{(m)})$ with respect to $J$.