At any given time, in order to adapt the coefficient at time n + 1, we add an amount proportional to the magnitude of the derivative with a sign that is opposite to that of the derivative of d_n^2 at time n:

    a_1^{(n+1)} = a_1^{(n)} - \alpha \frac{\partial d_n^2}{\partial a_1}        (41)

where α is some positive proportionality constant.
Since d_n = x_n - a_1 \hat{x}_{n-1} for the first-order predictor,

    \frac{\partial d_n^2}{\partial a_1} = -2 (x_n - a_1 \hat{x}_{n-1}) \hat{x}_{n-1}        (42)
                                        = -2 d_n \hat{x}_{n-1}.        (43)
Substituting this into (41), we get

    a_1^{(n+1)} = a_1^{(n)} + \alpha d_n \hat{x}_{n-1}        (44)
where we have absorbed the 2 into α. The residual value d_n is available only to the encoder. Therefore, in order for both the encoder and decoder to use the same algorithm, we replace d_n by the quantized residual \hat{d}_n in (44) to obtain

    a_1^{(n+1)} = a_1^{(n)} + \alpha \hat{d}_n \hat{x}_{n-1}        (45)
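As a quick numerical illustration of (45) (the numbers here are invented purely for illustration): with α = 0.1, a_1^{(n)} = 0.5, \hat{x}_{n-1} = 2, and a quantized residual \hat{d}_n = 0.4, the update gives

    a_1^{(n+1)} = 0.5 + 0.1 \cdot 0.4 \cdot 2 = 0.58

so the coefficient is nudged upward, in the direction that would have raised the prediction and reduced the residual.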
Extending this adaptation equation for a first-order predictor to an N th-order predictor is
relatively easy. The equation for the squared prediction error is given by
    d_n^2 = \left( x_n - \sum_{i=1}^{N} a_i \hat{x}_{n-i} \right)^2        (46)
Taking the derivative with respect to a_j will give us the adaptation equation for the jth predictor coefficient:
    a_j^{(n+1)} = a_j^{(n)} + \alpha \hat{d}_n \hat{x}_{n-j}        (47)
We can combine all N equations in vector form to get
    A^{(n+1)} = A^{(n)} + \alpha \hat{d}_n \hat{X}_{n-1}        (48)
where
    \hat{X}_n = \begin{bmatrix} \hat{x}_n \\ \hat{x}_{n-1} \\ \vdots \\ \hat{x}_{n-N+1} \end{bmatrix}        (49)
This particular adaptation algorithm is called the least mean squared (LMS) algorithm [175].
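To make the adaptation loop concrete, here is a minimal Python sketch of the Nth-order update of (46)-(49) inside a simplified DPCM encoder. The step size alpha, the toy uniform quantizer, and all function and variable names are assumptions introduced for illustration, not part of the text; a real codec would use its own quantizer and reconstruction path.

```python
import numpy as np

def lms_dpcm_encode(x, N=4, alpha=0.01, step=0.5):
    """Sketch of DPCM encoding with LMS-adapted predictor coefficients.

    Assumes (not from the text) a uniform quantizer with the given step
    size and zero-initialized coefficients and sample history.
    """
    a = np.zeros(N)              # predictor coefficients A^(n)
    xhat_hist = np.zeros(N)      # [xhat_{n-1}, xhat_{n-2}, ..., xhat_{n-N}]
    d_hat_out = []
    for x_n in x:
        p = a @ xhat_hist                    # prediction: sum_i a_i * xhat_{n-i}
        d = x_n - p                          # residual d_n (encoder only)
        d_hat = step * np.round(d / step)    # quantized residual sent to the decoder
        xhat_n = p + d_hat                   # reconstruction shared with the decoder
        a = a + alpha * d_hat * xhat_hist    # vector LMS update, Eq. (48)
        xhat_hist = np.roll(xhat_hist, 1)    # shift the history one step
        xhat_hist[0] = xhat_n                # newest reconstruction goes in front
        d_hat_out.append(d_hat)
    return d_hat_out, a
```

Because the update uses only the quantized residual \hat{d}_n and the reconstructed samples \hat{x}_{n-i}, a decoder running the same update (receiving \hat{d}_n instead of computing it) keeps its coefficients synchronized with the encoder's, which is exactly why d_n was replaced by \hat{d}_n in (45).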
11.6 Delta Modulation
A very simple form of DPCM that has been widely used in a number of speech-coding applications is the delta modulator (DM). The DM can be viewed as a DPCM system with a 1-bit quantizer.
 