Digital Signal Processing Reference
plot in the figure shows the channel input power per symbol as a function of the symbol error probability, which is specified to be identical for all subchannels (i.e., P_e(k) is identical for all k). The plot shows the minimum power P_min of the optimized system as well as the power P_brute for the system with no bit allocation (b_k = b for all k), both divided by M to get the per-symbol value.
this example the coding gain is
P brute
P min
=19 . 17 ,
which corresponds to 12 . 8 dB for any symbol error probability. Thus the sepa-
ration between the two curves in the bottom plot (which has a log-log axis) is
identical for all symbol error probabilities.
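The decibel figure can be checked directly; a quick sketch in plain Python, assuming nothing beyond the power ratio 19.17 quoted above:

```python
import math

# Coding gain quoted in the text: P_brute / P_min = 19.17.
gain = 19.17

# A power ratio expressed in decibels is 10*log10(ratio).
gain_db = 10 * math.log10(gain)

print(f"{gain_db:.1f} dB")  # → 12.8 dB, matching the text
```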
Example 17.4: Channel with deeper nulls
In this example all quantities are as in the preceding example, but the channel is different. We take C(z) = C_0(z e^{-jω_0}), where ω_0 = 6π/M as before, and

    C_0(z) = 0.1734 + 0.1634 z^{-1} + 0.1664 z^{-2} + 0.1651 z^{-3} + 0.1621 z^{-4} + 0.1696 z^{-5}.
Figure 17.24 shows the channel response and the required channel input power
as a function of error probability. In this example the coding gain is

    P_brute / P_min = 127,

which corresponds to 21 dB. The higher coding gain is a consequence of the fact that the channel nulls in this example are deeper than in Example 17.3.
As a result some of the DFT coefficients C[k] have very small values, and the set of numbers 1/|C[k]|^2 has large variation. This makes their arithmetic-to-geometric mean ratio (AM/GM) quite large. Recall here that the coding gain due to bit allocation is precisely the AM/GM ratio (Sec. 14.7):
    G = P_brute / P_min = [ (1/M) Σ_{k=0}^{M-1} 1/|C[k]|^2 ] / [ Π_{k=0}^{M-1} 1/|C[k]|^2 ]^{1/M}.
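As a concrete sketch, this AM/GM coding gain can be computed numerically from the taps of C_0(z) given above. The number of subchannels M is not restated in this excerpt, so the value M = 64 below is an assumption made purely for illustration; the resulting gain therefore need not match the quoted value of 127.

```python
import numpy as np

# Taps of C0(z) as given in the text.
c0 = np.array([0.1734, 0.1634, 0.1664, 0.1651, 0.1621, 0.1696])

M = 64  # number of subchannels -- ASSUMED for illustration (not given in this excerpt)
omega0 = 6 * np.pi / M

# C(z) = C0(z e^{-j*omega0}) corresponds to modulating the taps:
# c[n] = c0[n] * e^{j*omega0*n}.
n = np.arange(len(c0))
c = c0 * np.exp(1j * omega0 * n)

# M-point DFT samples C[k] of the channel impulse response.
C = np.fft.fft(c, M)

# Coding gain due to bit allocation: arithmetic-to-geometric mean ratio
# of the numbers 1/|C[k]|^2.
x = 1.0 / np.abs(C) ** 2
am = x.mean()
gm = np.exp(np.log(x).mean())  # geometric mean computed via logs for stability
G = am / gm

print(f"coding gain G = {G:.2f} ({10 * np.log10(G):.1f} dB)")
```

Since the arithmetic mean of positive numbers always dominates their geometric mean, G ≥ 1, and G grows with the spread of 1/|C[k]|^2, which is exactly why deeper channel nulls produce a larger coding gain.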
The aforementioned variation of C[k] can be seen qualitatively from the following table. Indeed, for k = 11 the second channel has a much smaller value of C[k], though for other values of k the two channels have comparable magnitudes.