Digital Signal Processing Reference
evaluated for the received vector $\mathbf{y}$. Note that we drop the term $\Pr(\mathbf{y})$ in the final line since it does not vary with our hypothesis for $\mathbf{d}$. The computational complexity of calculating these marginals would appear to be $O(2^N)$, but given that $\mathbf{y}$ is obtained from $\mathbf{d}$ via a convolution, the complexity decreases to a number linear in $N$ if we use trellis decoding, to be illustrated in Section 3.5. As it turns out, the complexity reduction will be successful provided the a priori probability mass function $\Pr(\mathbf{d})$ factors into the product of its marginals
\[
\Pr(\mathbf{d}) = \prod_{i=1}^{N} \Pr(d_i).
\]
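As a quick sanity check on this factorization, a minimal sketch (using a toy rate-1/2 repetition code that is not from the text, with the interleaver omitted) compares the true joint PMF, which is uniform over valid codewords only, against the product of its marginals:

```python
from math import prod

# Toy outer code (hypothetical, not from the text): two info bits, each
# repeated twice, so d = (c1, c1, c2, c2); the interleaver is omitted.
codewords = {(b1, b1, b2, b2) for b1 in (0, 1) for b2 in (0, 1)}

def pr_d_true(d):
    # True a priori PMF: uniform over the valid codewords, zero elsewhere.
    return 1 / len(codewords) if tuple(d) in codewords else 0.0

def pr_d_factored(d):
    # Product-of-marginals approximation: each bit is marginally uniform.
    return prod(0.5 for _ in d)

print(pr_d_true((0, 0, 1, 1)), pr_d_factored((0, 0, 1, 1)))  # 0.25 0.0625
print(pr_d_true((0, 1, 1, 1)), pr_d_factored((0, 1, 1, 1)))  # 0.0 0.0625
```

The mismatch on both lines shows concretely why the factorization is only an approximation when the $d_i$ carry code redundancy, which is exactly the caveat raised next.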
This factorization is, strictly speaking, incorrect, since the variables $d_i$ are derived by interleaving the output of an error-correction code, which imparts redundancy among its elements for error-control purposes. This factorization, by contrast, treats the $d_i$ as a priori independent variables. Nonetheless, if we turn a blind eye to this shortcoming in the name of efficiency, the term $\Pr(\mathbf{d})$ will contribute a factor $\Pr(d_i)$ to each term of the sum in (3.2), so that we may rewrite the a posteriori probabilities as
\[
\Pr(d_i \mid \mathbf{y}) \propto \Pr(d_i) \underbrace{\sum_{d_j,\, j \neq i} \Pr(\mathbf{y} \mid \mathbf{d}) \prod_{l \neq i} \Pr(d_l)}_{\text{extrinsic probability}}, \qquad i = 1, 2, \ldots, N.
\]
This extrinsic probability for symbol $i$ is so named because it is seen to depend on symbols other than $d_i$ [although $d_i$ still enters via the likelihood function $\Pr(\mathbf{y} \mid \mathbf{d})$], in contrast to the first term $\Pr(d_i)$, which depends only on $d_i$. As this variable will appear frequently, we denote it as
\[
T_i(d_i) \triangleq \delta \sum_{d_j,\, j \neq i} \Pr(\mathbf{y} \mid \mathbf{d}) \prod_{l \neq i} \Pr(d_l), \qquad i = 1, 2, \ldots, N,
\]
where the scale factor $\delta$ is chosen so that the evaluations sum to one: $T_i(-1) + T_i(1) = 1$.
Observe that, due to the summing operation on the right-hand side, the function $T_i(\cdot)$ behaves as a type of marginal probability, dependent on the sole bit $d_i$.
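For small $N$, the extrinsic probability $T_i(d_i)$ can be evaluated by brute force directly from its definition. The sketch below is illustrative only: the BPSK alphabet, the channel taps, the noise variance, and the received vector are all assumed, and the Gaussian likelihood is taken up to a constant (which the normalization by $\delta$ absorbs):

```python
import itertools
import math

# Assumed toy setup: N = 3 BPSK symbols d_i in {-1, +1}, an ISI channel with
# taps h, AWGN of variance sigma2, and y = conv(h, d) + noise.
N, h, sigma2 = 3, [1.0, 0.5], 0.5
y = [1.2, 1.4, -0.3, -0.6]  # assumed received vector, length N + len(h) - 1

def likelihood(y, d):
    # Pr(y | d) up to a constant: Gaussian likelihood of the noiseless
    # convolution of the channel taps with the symbol sequence.
    s = [sum(h[k] * d[n - k] for k in range(len(h)) if 0 <= n - k < N)
         for n in range(N + len(h) - 1)]
    return math.exp(-sum((yi - si) ** 2 for yi, si in zip(y, s)) / (2 * sigma2))

prior = {-1: 0.5, +1: 0.5}  # uniform a priori Pr(d_l)

def T(i):
    # Sum over all configurations of d_j, j != i, of Pr(y|d) * prod Pr(d_l),
    # then scale by delta so that T_i(-1) + T_i(+1) = 1.
    t = {}
    for di in (-1, +1):
        total = 0.0
        for rest in itertools.product((-1, +1), repeat=N - 1):
            d = list(rest[:i]) + [di] + list(rest[i:])
            total += likelihood(y, d) * math.prod(
                prior[d[l]] for l in range(N) if l != i)
        t[di] = total
    delta = 1.0 / (t[-1] + t[+1])
    return {di: delta * t[di] for di in t}

T0 = T(0)
print(T0[-1] + T0[+1])  # 1.0 by construction of delta
```

The exponential cost of the two nested loops is exactly the $O(2^N)$ burden that the trellis decoding of Section 3.5 avoids.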
Now, the outer decoder aims to infer the bits contained in $\mathbf{c}$ from the symbols contained in $\mathbf{d}$, according to
\[
\Pr(c_j \mid \mathbf{d}) = \sum_{c_i,\, i \neq j} \Pr(\mathbf{c} \mid \mathbf{d}) \propto \sum_{c_i,\, i \neq j} \Pr(\mathbf{d} \mid \mathbf{c}) \Pr(\mathbf{c}).
\]
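This marginalization can likewise be carried out by brute force for a tiny code. The sketch below is a hypothetical illustration: the single-parity-check code, and the modeling of the soft evidence $\Pr(\mathbf{d} \mid \mathbf{c})$ as a product of assumed per-bit probabilities from the inner stage, are both inventions for the example, not constructions from the text:

```python
import itertools
from math import prod

# Hypothetical toy code: K = 2 info bits plus one parity bit c3 = c1 XOR c2,
# so N = 3. Pr(c) is a scaled indicator over the valid codewords.
K, N = 2, 3
valid = {c + (c[0] ^ c[1],) for c in itertools.product((0, 1), repeat=K)}

def prior(c):
    return 1 / len(valid) if c in valid else 0.0

# Assumed soft evidence for Pr(d | c): modeled, purely for illustration, as a
# product of per-bit probabilities q[i][c_i] supplied by the inner stage.
q = [{0: 0.8, 1: 0.2}, {0: 0.3, 1: 0.7}, {0: 0.6, 1: 0.4}]

def pr_d_given_c(c):
    return prod(q[i][c[i]] for i in range(N))

def pr_cj_given_d(j):
    # Marginalize out every bit c_i with i != j, then normalize over c_j.
    t = {0: 0.0, 1: 0.0}
    for c in itertools.product((0, 1), repeat=N):
        t[c[j]] += pr_d_given_c(c) * prior(c)
    z = t[0] + t[1]
    return {b: t[b] / z for b in t}

print(pr_cj_given_d(0))  # posterior for c_1, summing to one
```

Because the prior vanishes on invalid codewords, only the four valid $\mathbf{c}$ contribute to the sum, which is how the code's redundancy sharpens the per-bit posteriors.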
If the information bits $c_1, \ldots, c_K$ are each equiprobable, then the a priori probability function $\Pr(\mathbf{c}) = \Pr(c_1, \ldots, c_K, c_{K+1}, \ldots, c_N)$ behaves as a scaled indicator function