G_{\mathrm{DPCM}} = \frac{D_{\mathrm{PCM}}(R)}{D_{\mathrm{DPCM}}(R)} = \frac{\sigma_s^2}{\sigma_p^2},    (4.16)

where \sigma_s^2 and \sigma_p^2 are the variances of the signals being quantized by PCM and DPCM, respectively (the source signal for PCM and the prediction error for DPCM), when both systems operate at the same bit-rate R.
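As an illustration of Eq. 4.16 (not part of the original derivation), the prediction gain can be estimated empirically. The sketch below assumes a synthetic first-order autoregressive (AR(1)) test source and a single-tap linear predictor; the coefficient value and variable names are illustrative only.

```python
import numpy as np

# Minimal sketch (assumed AR(1) source, one-tap predictor) estimating the
# DPCM gain of Eq. 4.16 as the ratio of the source variance (PCM quantizer
# input) to the prediction-error variance (DPCM quantizer input).
rng = np.random.default_rng(0)

a = 0.9                       # assumed AR(1) coefficient of the test source
n = 100_000
w = rng.standard_normal(n)    # unit-variance innovation
x = np.zeros(n)
for i in range(1, n):         # x[i] = a * x[i-1] + w[i]
    x[i] = a * x[i - 1] + w[i]

# Optimal one-tap predictor coefficient: r(1) / r(0)
r0 = np.dot(x, x) / n
r1 = np.dot(x[1:], x[:-1]) / (n - 1)
residual = x[1:] - (r1 / r0) * x[:-1]   # prediction error fed to the DPCM quantizer

sigma_s2 = np.var(x)          # sigma_s^2
sigma_p2 = np.var(residual)   # sigma_p^2
print(f"estimated G_DPCM = {sigma_s2 / sigma_p2:.2f} "
      f"(AR(1) theory: 1/(1 - a^2) = {1 / (1 - a**2):.2f})")
```

For this assumed source the estimate should approach 1/(1 - a^2), roughly 5.3, i.e. about 7 dB of gain over PCM at the same bit-rate.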
Based on the theory of quantization (Jayant and Noll 1984; Berger 1971), the relationship between the variance and the bit-rate can be formulated as

\sigma_q^2 = \varepsilon^2 \, 2^{-2R} \, \sigma_x^2,    (4.17)
where \sigma_q^2 and \sigma_x^2 are the quantization error variance and the variance of the source, respectively, and R represents the bit-rate. \varepsilon^2 is a constant that depends on the characteristics of the quantizer, usually called the 'quantizer performance factor'. By introducing Eq. 4.17 into Eq. 4.16 we obtain
G_{\mathrm{DPCM}} = \frac{\varepsilon_{\mathrm{PCM}}^2 \, \sigma_s^2}{\varepsilon_{\mathrm{DPCM}}^2 \, \sigma_p^2},    (4.18)

where \varepsilon_{\mathrm{PCM}}^2 and \varepsilon_{\mathrm{DPCM}}^2 are the quantizer performance factors of the PCM and DPCM coding systems, respectively. If the source is stationary and its power spectral density function is S(e^{j\omega}), then the minimum \sigma_p^2 can be calculated as (Jayant and Noll 1984)
\sigma_{p,\min}^2 = \lim_{N \to \infty} \sigma_p^2 = \exp\!\left( \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S(e^{j\omega}) \, d\omega \right).    (4.19)
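As a numerical sanity check (not from the source text), Eq. 4.19 can be evaluated for a source whose power spectral density is known in closed form. The sketch below assumes the same AR(1) model, S(e^{j\omega}) = \sigma_w^2 / |1 - a e^{-j\omega}|^2, for which the log-integral is known to recover the innovation variance \sigma_w^2.

```python
import numpy as np

# Minimal sketch (assumed AR(1) source): evaluate Eq. 4.19,
#   sigma_{p,min}^2 = exp( (1/(2*pi)) * integral of ln S(e^{jw}) over [-pi, pi] ),
# by averaging the log-PSD on a dense uniform frequency grid.
a = 0.9            # assumed AR(1) coefficient
sigma_w2 = 1.0     # innovation (driving noise) variance

omega = np.linspace(-np.pi, np.pi, 1 << 16)
S = sigma_w2 / np.abs(1.0 - a * np.exp(-1j * omega)) ** 2   # power spectral density

# The mean over a uniform grid approximates (1/(2*pi)) times the integral.
sigma_p_min2 = np.exp(np.mean(np.log(S)))

print(f"Eq. 4.19 estimate  : {sigma_p_min2:.4f}")
print(f"expected sigma_w^2 : {sigma_w2:.4f}")
```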
The spectral flatness measure is defined as
\gamma^2 = \frac{\sigma_{p,\min}^2}{\sigma_s^2}.    (4.20)
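Since \sigma_s^2 = (1/2\pi)\int_{-\pi}^{\pi} S(e^{j\omega}) d\omega, Eqs. 4.19 and 4.20 together say that \gamma^2 is the ratio of the geometric mean to the arithmetic mean of the power spectral density. The sketch below (an illustration under the same AR(1) assumption, for which \gamma^2 = 1 - a^2) computes it directly from PSD samples.

```python
import numpy as np

# Minimal sketch (assumed AR(1) source): the spectral flatness measure of
# Eq. 4.20 as geometric mean / arithmetic mean of the PSD over [-pi, pi].
a = 0.9
sigma_w2 = 1.0

omega = np.linspace(-np.pi, np.pi, 1 << 16)
S = sigma_w2 / np.abs(1.0 - a * np.exp(-1j * omega)) ** 2

sigma_p_min2 = np.exp(np.mean(np.log(S)))   # Eq. 4.19: geometric mean of the PSD
sigma_s2 = np.mean(S)                       # source variance: arithmetic mean of the PSD
gamma2 = sigma_p_min2 / sigma_s2            # Eq. 4.20

print(f"gamma^2 = {gamma2:.4f}   (closed form 1 - a^2 = {1 - a**2:.4f})")
print(f"maximum prediction gain 1/gamma^2 = {1.0 / gamma2:.2f}")
```

The resulting 1/\gamma^2 matches the gain estimated after Eq. 4.16 for the same assumed source, consistent with the inverse relationship discussed next.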
It is obvious that the gain of prediction coding is inversely proportional to the spectral flatness measure, since the maximum achievable gain is \sigma_s^2 / \sigma_{p,\min}^2 = 1/\gamma^2. Intuitively, a signal is easier to predict if its spectrum is sharper. Conversely, a signal is harder to predict if its spectrum is flatter. It can be proved that Eq. 4.19 can be rewritten as (Jayant and Noll 1984)
\sigma_{p,\min}^2 = \lim_{N \to \infty} \left( \prod_{k=1}^{N} \lambda_k \right)^{1/N},    (4.21)
where \lambda_k is the k-th eigenvalue of the N-th order autocorrelation matrix of the signal.
On the other hand, because similar matrices have identical traces (so the trace of the autocorrelation matrix, N\sigma_s^2, equals the sum of its eigenvalues), it is obvious that
\sigma_s^2 = \lim_{N \to \infty} \frac{1}{N} \sum_{k=1}^{N} \lambda_k.    (4.22)
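Eqs. 4.21 and 4.22 can likewise be checked numerically (a sketch under the same AR(1) assumption, with autocorrelation r(k) = \sigma_s^2 a^{|k|}): the geometric mean of the eigenvalues of the N x N autocorrelation matrix approaches \sigma_{p,\min}^2 as N grows, while their arithmetic mean equals \sigma_s^2 exactly, so their ratio again yields the spectral flatness measure.

```python
import numpy as np

# Minimal sketch (assumed AR(1) source): verify Eqs. 4.21 and 4.22 through the
# eigenvalues of the N x N autocorrelation (Toeplitz) matrix r(k) = sigma_s^2 * a^|k|.
a = 0.9
sigma_w2 = 1.0
sigma_s2 = sigma_w2 / (1.0 - a ** 2)       # AR(1) source variance

N = 1024
k = np.arange(N)
R = sigma_s2 * a ** np.abs(k[:, None] - k[None, :])   # autocorrelation matrix

lam = np.linalg.eigvalsh(R)                # eigenvalues lambda_k (all positive)

geo_mean = np.exp(np.mean(np.log(lam)))    # Eq. 4.21: -> sigma_{p,min}^2 as N grows
arith_mean = np.mean(lam)                  # Eq. 4.22: trace / N = sigma_s^2

print(f"geometric mean  = {geo_mean:.4f}  (sigma_p,min^2 = sigma_w^2 = {sigma_w2:.4f})")
print(f"arithmetic mean = {arith_mean:.4f}  (sigma_s^2 = {sigma_s2:.4f})")
print(f"gamma^2 ~ {geo_mean / arith_mean:.4f}  (1 - a^2 = {1 - a**2:.4f})")
```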
 