Digital Signal Processing Reference
Symbols        x1         x2          x3          x4          x5
Probabilities  0.1        0.2         0.1         0.4         0.2
Intervals      [0 - 0.1[  [0.1 - 0.3[ [0.3 - 0.4[ [0.4 - 0.8[ [0.8 - 1[

Table 4.5. Probabilities associated with five events {X(n) = x_i} and coding interval definition
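The intervals of Table 4.5 follow directly from the probabilities by accumulation: each symbol receives a half-open interval whose width equals its probability, and the intervals partition [0 − 1[. A minimal sketch (the dictionary layout and names are illustrative, not from the text):

```python
# Build the coding intervals of Table 4.5 from the symbol probabilities.
# Each symbol x_i gets a half-open interval [low, low + p_i[; together
# the intervals partition [0, 1[.

probs = {"x1": 0.1, "x2": 0.2, "x3": 0.1, "x4": 0.4, "x5": 0.2}

intervals = {}
low = 0.0
for sym, p in probs.items():
    intervals[sym] = (low, low + p)
    low += p

for sym, (lo, hi) in intervals.items():
    print(f"{sym}: [{lo:.1f} - {hi:.1f}[")
```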
- next, "include" the interval [0.1 − 0.3[ specified for the symbol x2 in the interval [0.3 − 0.4[, which gives the interval [0.31 − 0.33[;
- continue with x4, specified by the interval [0.4 − 0.8[, to find the limits of the new interval:

  0.31 + 0.4 × (0.33 − 0.31) = 0.318
  0.31 + 0.8 × (0.33 − 0.31) = 0.326
Encoding [x3, x2, x4] amounts to transmitting any real number belonging to the interval [0.318 − 0.326[, for example 0.32, or more exactly the most economical binary representation possible, for example {0101001} since 2^−2 + 2^−4 + 2^−7 = 0.3203.
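The interval-narrowing steps above can be sketched in code; `encode_interval` is an illustrative name, and the intervals are those of Table 4.5:

```python
# Sketch of the interval-narrowing step of arithmetic encoding for the
# sequence [x3, x2, x4]. At each step the current interval is shrunk to
# the sub-interval assigned to the next symbol.

intervals = {"x1": (0.0, 0.1), "x2": (0.1, 0.3), "x3": (0.3, 0.4),
             "x4": (0.4, 0.8), "x5": (0.8, 1.0)}

def encode_interval(sequence):
    low, high = 0.0, 1.0
    for sym in sequence:
        s_low, s_high = intervals[sym]
        width = high - low
        low, high = low + width * s_low, low + width * s_high
    return low, high

low, high = encode_interval(["x3", "x2", "x4"])
print(low, high)  # the interval [0.318 - 0.326[ of the worked example
```

Any real number in the returned interval identifies the sequence unambiguously, which is exactly the property the decoder exploits.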
- At the decoder, the following processing is carried out:
- since 0.3203 belongs to the interval [0.3 − 0.4[, we can determine x3;
- since:

  (0.3203 − 0.3) / (0.4 − 0.3) = 0.203 ∈ [0.1 − 0.3[

  we determine x2;
- etc.
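The decoding loop just described can be sketched as follows: at each step, find the symbol whose interval contains the current value, emit it, then rescale the value back to [0 − 1[. The function name and the symbol count argument are illustrative assumptions:

```python
# Sketch of the decoding steps above: locate the symbol whose interval
# contains the current value, then rescale the value for the next step.

intervals = {"x1": (0.0, 0.1), "x2": (0.1, 0.3), "x3": (0.3, 0.4),
             "x4": (0.4, 0.8), "x5": (0.8, 1.0)}

def decode(value, n_symbols):
    out = []
    for _ in range(n_symbols):
        for sym, (low, high) in intervals.items():
            if low <= value < high:
                out.append(sym)
                # rescale to [0, 1[ so the next symbol can be located
                value = (value - low) / (high - low)
                break
    return out

print(decode(0.3203, 3))  # recovers ['x3', 'x2', 'x4']
```

In this sketch the number of symbols is passed explicitly; practical schemes instead use a terminating symbol or transmit the length separately.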
For further details, especially on how the most economical binary representations
are obtained in practice, the reader is referred to the article [HOW 94].
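One simple way to find a short binary fraction inside an interval is a greedy bit-by-bit search, which on [0.318 − 0.326[ happens to recover the {0101001} of the example. This is only an illustrative assumption in the spirit of the "most economical" representation; it is not the procedure of [HOW 94]:

```python
# Greedy sketch (not the [HOW 94] algorithm): grow a binary expansion
# bit by bit until its value falls inside the half-open target interval.

def shortest_binary_in(low, high):
    bits, value, step = "", 0.0, 0.5
    while not (low <= value < high):
        # append a 1-bit only if doing so does not overshoot the interval
        if value + step < high:
            value += step
            bits += "1"
        else:
            bits += "0"
        step /= 2
    return bits, value

bits, value = shortest_binary_in(0.318, 0.326)
print(bits, value)  # 0101001, i.e. 2**-2 + 2**-4 + 2**-7 = 0.3203125
```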
4.3. Noiseless coding of a discrete source with memory
The presentation above examined the most elementary case. In general, a statistical dependence exists between successive samples from the source, which can be exploited to reduce the number of bits necessary to represent the source exactly.