$x = (x_1 x_2 \ldots x_n)$. The operation of the channel is described by the vector addition modulo 2 of $x$ and of an error vector $e = (e_1 e_2 \ldots e_n)$:

$$y = x \oplus e \qquad (3.1)$$

with $y = (y_1 y_2 \ldots y_n)$, the notation $\oplus$ now designating the modulo 2 addition of
two words, symbol to symbol. The hypothesis that the binary symmetric channel is memoryless means that the random variables $e_i$, $i = 1 \ldots n$, are mutually independent. The number of possible error configurations is $2^n$, and their probability, for an error probability $p$ of the given channel, depends only on the weight $w(e)$ of the realized error configuration $e$, defined as the number of 1 symbols that it contains. Thus, the probability of the appearance of a particular error configuration of weight $w(e)$ affecting a word of length $n$ equals:
$$P_e = p^{w(e)} (1-p)^{n-w(e)} \qquad (3.2)$$
As $p$ was assumed to be lower than $1/2$, the probability $P_e$ is a decreasing function of the weight $w(e)$, whatever the value of $n$.
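As an illustration, here is a minimal Python sketch (our own, not from the text) that simulates the channel operation $y = x \oplus e$ of (3.1) with independent errors and evaluates the pattern probability (3.2); the names `bsc` and `pattern_probability` are hypothetical.

```python
import random

def bsc(x, p, rng=random):
    """Transmit the binary word x over a memoryless binary symmetric channel:
    each symbol is flipped independently with probability p."""
    e = [1 if rng.random() < p else 0 for _ in x]  # i.i.d. error vector e
    y = [xi ^ ei for xi, ei in zip(x, e)]          # y = x (+) e, symbol-wise modulo 2
    return y, e

def pattern_probability(e, p):
    """Probability (3.2) of one particular error configuration e of weight w(e)."""
    n, w = len(e), sum(e)                          # w(e) = number of 1 symbols in e
    return p ** w * (1 - p) ** (n - w)

x = [0, 1, 1, 0, 1, 0, 0, 1]
y, e = bsc(x, p=0.1)
print("y =", y, " w(e) =", sum(e), " P_e =", pattern_probability(e, 0.1))
```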
The probability of the appearance of any error configuration of weight $w$ equals:

$$P_w = \binom{n}{w} p^w (1-p)^{n-w} \qquad (3.3)$$
The weight of the error configurations thus follows a binomial distribution whose mathematical expectation (or mean) is $np$ and whose variance (the expectation of the squared difference between its effective value and its mean) is $np(1-p)$.
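Continuing the sketch above (again hypothetical code), one can check formula (3.3) and the stated mean $np$ and variance $np(1-p)$ numerically:

```python
from math import comb

def weight_probability(n, w, p):
    """Probability (3.3) that an error configuration of weight w occurs."""
    return comb(n, w) * p ** w * (1 - p) ** (n - w)

n, p = 15, 0.1
probs = [weight_probability(n, w, p) for w in range(n + 1)]
mean = sum(w * pw for w, pw in enumerate(probs))
var = sum((w - mean) ** 2 * pw for w, pw in enumerate(probs))
print(sum(probs))             # 1.0: the weights 0..n exhaust all 2^n error patterns
print(mean, n * p)            # both 1.5
print(var, n * p * (1 - p))   # both 1.35
```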
Mutual information and capacity of the binary symmetric channel
To characterize a channel, we first have to measure the quantity of information that a symbol $Y$ leaving the channel provides, on average, about the corresponding input symbol $X$. This value, called mutual information, whose unit is the Shannon (Sh), is defined for a channel with discrete input and output by:
$$I(X;Y) = \sum_X \sum_Y \Pr(X,Y) \log_2 \frac{\Pr(X,Y)}{\Pr(X)\Pr(Y)} = \sum_X \sum_Y \Pr(X,Y) \log_2 \frac{\Pr(X|Y)}{\Pr(X)} \qquad (3.4)$$

In this expression, the sums extend over all the discrete values that $X$ and $Y$ can take in a given alphabet. $\Pr(X,Y)$ denotes the joint probability of $X$ and $Y$, $\Pr(X|Y)$ the probability of $X$ conditional on $Y$ (that is, when $Y$ is given), and $\Pr(X)$ and $\Pr(Y)$ are the marginal probabilities of $X$ and $Y$ (that is, of each of the variables $X$ and $Y$ whatever the value taken by the other: $\Pr(X) = \sum_Y \Pr(X,Y)$ and $\Pr(Y) = \sum_X \Pr(X,Y)$).
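As a numerical illustration (a sketch under our own assumptions, not from the text), the following code evaluates (3.4) directly from a joint probability table; for a binary symmetric channel with error probability $p$ and equiprobable inputs, the result equals $1 - H_2(p)$ Sh, where $H_2$ is the binary entropy function, which is precisely the capacity of that channel.

```python
from math import log2

def mutual_information(p_joint):
    """Evaluate I(X;Y) of (3.4) from a joint probability table p_joint[x][y]."""
    px = [sum(row) for row in p_joint]            # Pr(X): marginalize over Y
    py = [sum(col) for col in zip(*p_joint)]      # Pr(Y): marginalize over X
    return sum(pxy * log2(pxy / (px[i] * py[j]))
               for i, row in enumerate(p_joint)
               for j, pxy in enumerate(row) if pxy > 0)

p = 0.1
# BSC with equiprobable inputs: Pr(X=x, Y=y) = 0.5 * Pr(Y=y | X=x)
joint = [[0.5 * (1 - p), 0.5 * p],
         [0.5 * p, 0.5 * (1 - p)]]
h2 = -(p * log2(p) + (1 - p) * log2(1 - p))      # binary entropy H2(p)
print(mutual_information(joint))                 # ~0.531 Sh
print(1 - h2)                                    # the same value: capacity of the BSC
```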
These different probabilities are
linked according to Bayes' law:
$$\Pr(X,Y) = \Pr(X|Y) \Pr(Y) = \Pr(Y|X) \Pr(X) \qquad (3.5)$$
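A quick check of (3.5) on the same hypothetical joint table used above:

```python
p = 0.1
joint = [[0.5 * (1 - p), 0.5 * p],      # Pr(X,Y) for the BSC with equiprobable inputs
         [0.5 * p, 0.5 * (1 - p)]]
pr_y0 = joint[0][0] + joint[1][0]       # Pr(Y=0), marginal over X
pr_x0_y0 = joint[0][0] / pr_y0          # Pr(X=0 | Y=0)
pr_y0_x0 = joint[0][0] / 0.5            # Pr(Y=0 | X=0), with Pr(X=0) = 0.5
print(joint[0][0], pr_x0_y0 * pr_y0, pr_y0_x0 * 0.5)  # all three equal 0.45
```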