Digital Signal Processing Reference
Fig. 3.29 Nonuniform quantization: compression followed by uniform quantization. [Figure: (a) compression curve, (b) uniform quantizer, (c) overall transfer function; input x on the horizontal axis, output y on the vertical axis]
Since the mean is zero, the quantization noise power is equal to the variance, and is
given by:
$$\sigma_q^2 = \int_{-\Delta/2}^{\Delta/2} e^2\, p(e)\, de = \int_{-\Delta/2}^{\Delta/2} e^2\, \frac{1}{\Delta}\, de = \frac{\Delta^2}{12} \qquad (3.54)$$
which is independent of the number of bits M.
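The result of Eq. 3.54 can be checked numerically. The sketch below (with an assumed step size and a uniformly distributed test signal) rounds each sample to the nearest quantization level and compares the measured noise power with $\Delta^2/12$:

```python
import random

def uniform_quantize(x, delta):
    """Round x to the nearest multiple of the step size delta."""
    return delta * round(x / delta)

random.seed(0)
delta = 0.05  # assumed step size for this illustration
samples = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

# Empirical quantization noise power: mean squared quantization error.
noise_power = sum((x - uniform_quantize(x, delta)) ** 2
                  for x in samples) / len(samples)

print(f"measured   : {noise_power:.3e}")
print(f"Delta^2/12 : {delta**2 / 12:.3e}")
```

The two printed values agree closely, confirming that the noise power depends only on the step size, not on the signal level.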
3.6.1.2 Nonuniform Quantization
For some applications (such as speech communications), uniform quantization is
not efficient. In voice communication channels, very low speech amplitudes are
common, while large amplitudes are rare. From Eq. 3.54 above, the quantization
noise power $\sigma_q^2$ is constant for a constant step size $\Delta$; hence, the quantization signal-to-noise ratio $SNR_q$ is worse for low amplitudes than for large amplitudes. In
speech communications, therefore, smaller quantization steps should be used for
weaker amplitudes, and larger steps for larger amplitudes. This is the basis of
nonuniform quantization. It can be achieved by first distorting the original signal
using a logarithmic compression transfer function as shown in Fig. 3.29a, then
using a uniform quantizer with a linear transfer function as shown in Fig. 3.29b.
The final transfer function is shown in Fig. 3.29c. The compression curve has a
steeper slope for weak amplitudes; hence, it magnifies them more than large ones.
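The amplitude dependence of $SNR_q$ can be illustrated numerically. The sketch below (with an assumed step size and sinusoidal test signals of decreasing amplitude) shows how the SNR collapses for weak signals when the noise power $\Delta^2/12$ stays fixed:

```python
import math

delta = 0.05                     # assumed fixed step size
noise_power = delta**2 / 12      # quantization noise power, Eq. 3.54

snr_db = {}
for amplitude in (1.0, 0.1, 0.01):
    signal_power = amplitude**2 / 2   # power of a sine of this amplitude
    snr_db[amplitude] = 10 * math.log10(signal_power / noise_power)
    print(f"amplitude {amplitude:>5}: SNR_q = {snr_db[amplitude]:6.1f} dB")
# roughly 34 dB, 14 dB, and -6 dB: weak signals are buried in quantization noise
```

Nonuniform quantization counteracts exactly this loss by shrinking the effective step size where the signal is small.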
The compression law currently used in North America for speech is given by:
$$\frac{y}{y_{\max}} = \frac{\ln\left(1 + \mu\,|x|/x_{\max}\right)}{\ln(1 + \mu)}\,\mathrm{sgn}(x),$$
where $\mu$ is a parameter whose standard value is 255. The sgn function is used to
handle negative inputs. In the receiver, inverse compression, also called expansion,
is performed to counteract the distortion introduced by compression.
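The compression law above and its inverse can be sketched directly. This is a minimal illustration, not a production codec: the function names and the normalization $x_{\max} = y_{\max} = 1$ are assumptions for the example.

```python
import math

MU = 255.0  # standard mu-law parameter

def compress(x, mu=MU):
    """Mu-law compression of x in [-1, 1] (x_max = y_max = 1 assumed)."""
    return math.copysign(math.log(1 + mu * abs(x)) / math.log(1 + mu), x)

def expand(y, mu=MU):
    """Inverse of compress (expansion), applied at the receiver."""
    return math.copysign(((1 + mu) ** abs(y) - 1) / mu, y)

print(compress(0.01))            # ~0.23: a weak input is strongly magnified
print(compress(0.5))             # ~0.88: a strong input is magnified much less
print(expand(compress(0.3)))     # ~0.3: the round trip recovers the input
```

Note how an input of only 1 % of full scale is mapped to about 23 % of the output range, which is exactly the magnification of weak amplitudes described above.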