Using Bayes's rule, we may develop Pr(c | y) as¹

$$
\Pr(\mathbf{c} \mid \mathbf{y}) = \frac{\Pr(\mathbf{c}) \, \Pr(\mathbf{y} \mid \mathbf{c})}{\Pr(\mathbf{y})}
$$
in which the following applies.
The denominator term Pr(y) is the joint probability distribution for the channel outputs y = (y_1, ..., y_M), constituting the evidence, evaluated at the received data. As it contributes the same factor to each evaluation of Pr(c | y) (interpreted as a function of c for fixed y_1, ..., y_M), it may be omitted with no effect on either the wordwise or bitwise estimator.
The numerator term Pr(y | c) is the channel likelihood function, requiring as many evaluations as there are configurations of c (namely, 2^K); the vector y is held fixed at the received channel output values.
The leading term Pr(c) is the a priori probability mass function for configuration c. If this probability is uniform, that is, Pr(c) = 1/2^K for each configuration of c, then maximizing the a posteriori probability Pr(c | y) over c reduces to maximizing the likelihood function Pr(y | c) over c, as the sketch below illustrates numerically.
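A minimal numerical sketch makes both points concrete; the BPSK mapping over an AWGN channel is an assumption chosen for illustration and is not specified in this passage. Dividing by the evidence Pr(y) rescales every posterior value equally, so the maximizing configuration is unchanged, and under a uniform prior the MAP and ML word estimates coincide.

```python
# Illustrative sketch (assumed BPSK-over-AWGN model, not from the text):
# the evidence Pr(y) scales every posterior value equally, so omitting it
# does not change which configuration c maximizes the posterior.
import itertools
import numpy as np

K = 3                                   # number of information bits
sigma = 0.8                             # assumed AWGN standard deviation
y = np.array([0.9, -1.2, 0.4])          # received channel outputs

def likelihood(y, c, sigma):
    """Pr(y | c) for the BPSK mapping 0 -> +1, 1 -> -1 over AWGN."""
    x = 1.0 - 2.0 * np.asarray(c)       # modulated symbols
    return np.prod(np.exp(-(y - x) ** 2 / (2 * sigma ** 2)))

configs = list(itertools.product([0, 1], repeat=K))   # all 2**K configurations
prior = {c: 1.0 / 2 ** K for c in configs}            # uniform a priori pmf

# Unnormalized posterior Pr(c) * Pr(y | c); dividing every entry by the
# evidence Pr(y) would rescale them all by the same factor.
unnormalized = {c: prior[c] * likelihood(y, c, sigma) for c in configs}
evidence = sum(unnormalized.values())
posterior = {c: v / evidence for c, v in unnormalized.items()}

map_word = max(unnormalized, key=unnormalized.get)    # wordwise MAP estimate
ml_word = max(configs, key=lambda c: likelihood(y, c, sigma))
assert map_word == ml_word   # uniform prior: MAP coincides with ML
print(map_word, posterior[map_word])
```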
3.4 TURBO DECODER: OVERVIEW
By interpreting the channel as a 'parasitic' convolutional encoder, the communication chain assumes a serially concatenated structure, allowing direct application of the turbo decoding algorithm. In the absence of an interleaver, the cascade of the encoder and the channel would itself be a convolution, and the Viterbi algorithm could be applied to perform optimal decoding. An interleaver is usually included, however, to break up bursty error patterns; it also destroys the convolutional structure of the overall code. Iterative decoding splits the task into two local operations: an equalizer, which estimates the channel input given its output, and a decoder, which estimates the code word from the channel input. The 'turbo effect' is triggered by coupling information between these two local operators.
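One plausible realization of this coupling, sketched below, follows the standard extrinsic-information schedule; this is an assumed structure, not code from this text, and equalize(), decode(), interleave(), and deinterleave() are hypothetical stand-ins operating on log-likelihood ratios.

```python
# Structural sketch (assumed) of the extrinsic-information exchange that
# produces the 'turbo effect'. Each local operator takes a priori LLRs and
# returns a posteriori LLRs; only the extrinsic part is fed to the other.
import numpy as np

def turbo_iterations(y, equalize, decode, interleave, deinterleave,
                     n_bits, n_iter=5):
    """Couple the equalizer and decoder through extrinsic LLRs."""
    prior_eq = np.zeros(n_bits)            # no prior knowledge at the start
    for _ in range(n_iter):
        post_eq = equalize(y, prior_eq)    # estimate channel input from output
        ext_eq = post_eq - prior_eq        # keep only the extrinsic part
        prior_dec = deinterleave(ext_eq)   # undo the interleaver
        post_dec = decode(prior_dec)       # estimate the code word
        ext_dec = post_dec - prior_dec
        prior_eq = interleave(ext_dec)     # feed back to the equalizer
    return post_dec                        # final bitwise soft decisions
```

Subtracting the a priori input before feedback is what keeps each operator from being fed its own output, which would otherwise reinforce errors across iterations.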
The equalizer aims to calculate the marginal probabilities
$$
\Pr(d_i \mid \mathbf{y}) = \sum_{d_j,\, j \neq i} \Pr(\mathbf{d} \mid \mathbf{y}) = \sum_{d_j,\, j \neq i} \frac{\Pr(\mathbf{d}) \, \Pr(\mathbf{y} \mid \mathbf{d})}{\Pr(\mathbf{y})} \propto \sum_{d_j,\, j \neq i} \Pr(\mathbf{d}) \, \Pr(\mathbf{y} \mid \mathbf{d}), \qquad i = 1, 2, \ldots, N. \tag{3.2}
$$
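A brute-force sketch of this marginalization follows; it assumes a prior pmf indexed by bit tuples and a two-argument likelihood callable returning Pr(y | d) (for instance, the earlier AWGN example with sigma bound). A practical equalizer would instead compute these marginals with a forward-backward recursion over the channel trellis rather than an explicit sum over all configurations.

```python
# Brute-force evaluation of (3.2): sum Pr(d) Pr(y | d) over all
# configurations d, accumulating the terms where d_i takes each value.
import itertools
import numpy as np

def bitwise_marginals(y, prior, likelihood, N):
    """Pr(d_i | y) for each bit, by explicit summation over 2**N configurations."""
    marg = np.zeros((N, 2))
    for d in itertools.product([0, 1], repeat=N):
        w = prior[d] * likelihood(y, d)        # Pr(d) Pr(y | d)
        for i, bit in enumerate(d):
            marg[i, bit] += w                  # sum out d_j, j != i
    # The omitted factor 1/Pr(y) is common to both values of each bit,
    # so per-bit normalization recovers the exact marginals.
    return marg / marg.sum(axis=1, keepdims=True)

# Usage with the earlier (assumed) AWGN likelihood:
#   bitwise_marginals(y, prior, lambda y, d: likelihood(y, d, sigma), K)
```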
¹ For notational simplicity, we use Pr(·) to denote a probability mass function if the underlying distribution is discrete (like c), or a probability density function if the underlying distribution is continuous (like y).