capacity thus obtained will not be given here. We merely note that this capacity is higher than that of the binary symmetric channel deduced from it by taking a hard decision, that is, with its output restricted to a binary symbol, by a factor that increases when the signal to noise ratio of the channel decreases. This factor reaches π/2 when we make this ratio tend towards 0, if the noise is Gaussian. For a given signal to noise ratio, the binary input continuous output channel is therefore better than the binary symmetric channel that can be deduced from it by taking hard decisions. It is also simpler than the hard decision channel, since it needs no device for taking a binary decision according to the received real value. Taking a hard decision means losing the information carried by the individual variations of this value, which explains why the capacity of the soft output channel is higher.
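As a concrete check, here is a minimal numerical sketch, not taken from the text: the unit-energy ±1 inputs, the integration grid and the sample SNR values are all assumptions. It computes the capacity of the binary input Gaussian channel by numerical integration, the capacity of the binary symmetric channel deduced from it by a hard decision at threshold 0, and shows the ratio of the two growing towards π/2 ≈ 1.571 as the signal to noise ratio decreases.

```python
# Sketch: soft-output vs hard-decision capacity of a binary-input AWGN channel.
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import erfc

def c_soft(sigma):
    """Capacity (bits) of the binary-input AWGN channel, inputs +-1, noise std sigma."""
    y = np.linspace(-1 - 10 * sigma, 1 + 10 * sigma, 20001)
    phi = lambda t: np.exp(-t**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    p_y = 0.5 * (phi(y - 1) + phi(y + 1))                 # equiprobable inputs
    h_y = -trapezoid(p_y * np.log2(p_y + 1e-300), y)      # differential entropy of Y
    h_noise = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)  # h(Y|X): Gaussian noise entropy
    return h_y - h_noise

def c_hard(sigma):
    """Capacity (bits) of the BSC obtained by a hard decision at threshold 0."""
    p = 0.5 * erfc(1 / (sigma * np.sqrt(2)))              # crossover probability
    return 1 + p * np.log2(p) + (1 - p) * np.log2(1 - p)

for snr_db in [6, 0, -6, -12, -18]:
    sigma = 10 ** (-snr_db / 20)                          # SNR = 1/sigma^2 for unit-energy inputs
    cs, ch = c_soft(sigma), c_hard(sigma)
    print(f"SNR {snr_db:+3d} dB: C_soft={cs:.4f}  C_hard={ch:.4f}  ratio={cs/ch:.3f}")
# The ratio grows towards pi/2 ~ 1.571 as the SNR tends to 0.
```

In terms of signal to noise ratio, this factor corresponds to a penalty of about 10 log10(π/2) ≈ 2 dB for the hard decision channel in the low SNR regime.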
3.2.2 Capacity of a transmission channel
Here we will consider the most general case where the input and the output of the channel are no longer only scalar values but can be vectors whose dimension N is a function of the modulation system. For example, we will have N = 1 for an amplitude modulation and N = 2 for a phase modulation with a 4-point constellation. The scalars X and Y are therefore replaced by the vectors X and Y.
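For instance, the following sketch (the Gray mapping and unit energy are assumptions, not from the text) maps each pair of bits to a point of a 4-point phase constellation, i.e. a vector X with N = 2 components.

```python
# Hypothetical illustration of the N = 2 case: a 4-point phase constellation.
import numpy as np

gray_qpsk = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}  # assumed Gray mapping

def modulate(bits):
    """Map a bit pair to a unit-energy point X in R^2 (N = 2)."""
    angle = np.pi / 4 + gray_qpsk[bits] * np.pi / 2
    return np.array([np.cos(angle), np.sin(angle)])

print(modulate((0, 1)))   # a vector X with N = 2 components
```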
Capacity was introduced in Section 3.1 for a discrete input and output channel, and is defined as the maximum of the mutual information between its input and output variables, taken over all possible probability distributions of the input. For any dimension of the signal space, this expression remains:
$$C = \max_{p(\mathbf{X})} I(\mathbf{X}; \mathbf{Y}) \qquad (3.12)$$
where I(X; Y) is the mutual information between the vectors X and Y. When the input and the output of the channel are real values, and no longer discrete values, the probabilities are replaced by probability densities and the sums in relation (3.4) become integrals. For realizations x and y of the random variables X and Y, we can write the mutual information as a function of the probability densities of x and y:
$$I(\mathbf{X}; \mathbf{Y}) = \underbrace{\int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty}}_{2N \text{ times}} p(\mathbf{x})\, p(\mathbf{y} \mid \mathbf{x}) \log_2 \frac{p(\mathbf{y} \mid \mathbf{x})}{p(\mathbf{y})} \, d\mathbf{x} \, d\mathbf{y} \qquad (3.13)$$
To determine C, we therefore have to maximize (3.13), which is valid for all types of inputs (continuous or discrete) of any dimension N. In addition, the maximum is reached for equiprobable inputs (see Section 3.1), for which we have:
$$p(\mathbf{y}) = \frac{1}{M} \sum_{i=1}^{M} p(\mathbf{y} \mid \mathbf{x}_i)$$
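In practice, (3.13) with equiprobable inputs can be estimated by Monte Carlo simulation. The sketch below does this for an assumed setting, not from the text: an N = 2 Gaussian channel with an M = 4 phase constellation; p(y) is computed as the mixture above.

```python
# Monte Carlo estimate of I(X;Y) for equiprobable vector inputs over AWGN.
import numpy as np

rng = np.random.default_rng(0)
M, N, sigma = 4, 2, 0.7                                  # assumed constellation size, dimension, noise std
angles = np.pi / 4 + np.pi / 2 * np.arange(M)
X = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # M unit-energy points in R^N

def log_p_y_given_x(y, x):
    """log p(y|x) for N-dimensional Gaussian noise of std sigma per component."""
    return -np.sum((y - x) ** 2, axis=-1) / (2 * sigma**2) - N * np.log(2 * np.pi * sigma**2) / 2

n = 200_000
idx = rng.integers(M, size=n)                            # equiprobable inputs
Y = X[idx] + sigma * rng.standard_normal((n, N))         # received vectors

log_cond = log_p_y_given_x(Y, X[idx])                    # log p(y | sent x)
log_all = log_p_y_given_x(Y[:, None, :], X[None, :, :])  # log p(y | x_i) for every i
log_py = np.logaddexp.reduce(log_all, axis=1) - np.log(M)  # log p(y) as the 1/M mixture
I = np.mean(log_cond - log_py) / np.log(2)               # average of log2 p(y|x)/p(y)
print(f"I(X;Y) = {I:.3f} bits per channel use (log2 M = {np.log2(M):.0f})")
```

Averaging log2 p(y|x)/p(y) over realizations drawn from p(x)p(y|x) converges to the 2N-fold integral (3.13) without any explicit numerical integration, which is why this estimator is convenient for N > 1.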