Digital Signal Processing Reference
In-Depth Information
We should note that the capacity bound using the log-determinant formula above is attainable using an input that is Gaussian distributed [44]. With a binary input, the attainable capacity is generally lower. We close this section by examining the capacity bound for the binary-input case.
In the absence of intersymbol interference, the mutual information between channel input and channel output becomes, for the binary case,

I(d; y) = Pr(d = +1) ∫ Pr(y | d = +1) log [ Pr(y | d = +1) / Pr(y) ] dy
        + Pr(d = −1) ∫ Pr(y | d = −1) log [ Pr(y | d = −1) / Pr(y) ] dy
using for the output probability

Pr(y) = Pr(y | d = +1) Pr(d = +1) + Pr(y | d = −1) Pr(d = −1)
in which the conditional probability Pr(y | d) is Gaussian,

Pr(y | d = ±1) = (1 / (√(2π) σ)) exp( −(y ∓ 1)² / (2σ²) ).
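The binary-input mutual information above can be checked numerically by evaluating the two integrals on a grid. The sketch below assumes uniform-grid trapezoid-style summation and the function names are our own, not from the text; log base 2 is used so the result is in bits per channel use.

```python
import numpy as np

def cond_pdf(y, d, sigma):
    # Gaussian conditional density Pr(y | d) with mean d in {+1, -1}
    return np.exp(-(y - d) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

def mutual_information(sigma, p_plus=0.5, n=20001):
    # Numerically integrate I(d; y) over a grid wide enough to cover both modes.
    y = np.linspace(-1.0 - 8.0 * sigma, 1.0 + 8.0 * sigma, n)
    dy = y[1] - y[0]
    p_minus = 1.0 - p_plus
    f_plus = cond_pdf(y, +1.0, sigma)
    f_minus = cond_pdf(y, -1.0, sigma)
    # Output (mixture) density Pr(y), as in the text
    f_y = p_plus * f_plus + p_minus * f_minus
    # Integrands of the two terms; the floor avoids log(0) where a density underflows
    t_plus = p_plus * f_plus * np.log2(np.maximum(f_plus, 1e-300) / f_y)
    t_minus = p_minus * f_minus * np.log2(np.maximum(f_minus, 1e-300) / f_y)
    return dy * np.sum(t_plus + t_minus)
```

At low noise the result approaches 1 bit per channel use (the binary input is fully resolved), while at high noise it tends to zero, consistent with the saturation visible in Figure 3.12.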
Maximizing the mutual information I(d; y) over the input probabilities Pr(d = ±1) gives a uniform distribution on the input: Pr(d = +1) = Pr(d = −1) = 1/2. The resulting channel capacity (per channel use) may then be plotted against the SNR 10 log10(1/σ²) in dB, as in Figure 3.12, which also shows the capacity bound for the Gaussian input.
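The Gaussian-input bound plotted alongside the binary curve is the Shannon AWGN capacity C = (1/2) log2(1 + SNR) with SNR = 1/σ². A quick numerical check (the function name is our own):

```python
import numpy as np

def gaussian_capacity_bits(snr_db):
    # Shannon capacity of the real AWGN channel, in bits per channel use:
    # C = (1/2) log2(1 + SNR), with SNR = 1/sigma^2 given here in dB
    snr = 10.0 ** (snr_db / 10.0)
    return 0.5 * np.log2(1.0 + snr)
```

Unlike the binary-input curve, which saturates at 1 bit per channel use, this bound grows without limit as the SNR increases, which is why the two curves in Figure 3.12 separate at high SNR.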
Figure 3.12 Channel capacity per channel use for Gaussian and binary inputs, in the
absence of intersymbol interference.