Digital Signal Processing Reference
E{e_{b,i}(n) x(n - k)} = 0,   0 ≤ k ≤ i - 1.   (2.38)
Then, it is easy to show that, at each time n, the sequence of backward prediction errors of increasing order {e_{b,i}(n)} will be decorrelated. This means that the autocorrelation matrix of the backward prediction errors is diagonal. More precisely,

E{e_b(n) e_b^T(n)} = diag{P_{b,i}},   0 ≤ i ≤ L - 1.   (2.39)
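This decorrelation property can be checked numerically. Below is a minimal NumPy sketch, assuming an illustrative autocorrelation sequence r(k) = 0.8^|k| (not taken from the text): each order-i backward predictor is computed from its normal equations, and the covariance matrix of the stacked backward errors comes out diagonal.

```python
import numpy as np

L = 4
r = 0.8 ** np.arange(L)  # assumed autocorrelation sequence (illustrative)
# R[j, k] = E{x(n - j) x(n - k)} = r(|j - k|)
R = np.array([[r[abs(j - k)] for k in range(L)] for j in range(L)])

# Row i of T_b holds the order-i backward prediction error filter:
# e_{b,i}(n) = x(n - i) - w^T [x(n), ..., x(n - i + 1)]
T_b = np.eye(L)
for i in range(1, L):
    w = np.linalg.solve(R[:i, :i], R[:i, i])  # normal equations
    T_b[i, :i] = -w

# Covariance matrix of the backward prediction errors
D = T_b @ R @ T_b.T
print(np.round(D, 6))  # off-diagonal entries vanish: the errors are decorrelated
```

For this first-order-Markov autocorrelation, the diagonal entries P_{b,i} come out as 1, 0.36, 0.36, 0.36, since one past sample already captures all the predictable structure.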
Another way to get to this result comes from using (2.37) to write

E{e_b(n) e_b^T(n)} = T_b R_x T_b^T.   (2.40)
By definition, this is a symmetric matrix. From (2.36), it is easy to show that R_x T_b^T is a lower triangular matrix with P_{b,i} being the elements on its main diagonal. However, since T_b is also a lower triangular matrix, the product of both matrices must retain the same structure. But it also has to be symmetric, and hence it must be diagonal. Moreover, since the determinant of T_b is 1, it is a nonsingular matrix. Therefore, from (2.39) and (2.40) we can write
R_x^{-1} = T_b^T diag{P_{b,i}}^{-1} T_b = [diag{P_{b,i}}^{-1/2} T_b]^T [diag{P_{b,i}}^{-1/2} T_b].   (2.41)
This is called the Cholesky decomposition of the inverse of the autocorrelation matrix. Notice that the inverse of the autocorrelation matrix is factorized into the product of an upper and a lower triangular matrix that are related to each other through a transposition operation. These matrices are completely determined by the coefficients of the backward prediction error filter and the backward prediction error powers.
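The factorization (2.41) can also be verified numerically. A minimal sketch, again assuming the illustrative autocorrelation r(k) = 0.8^|k| (not from the text): T_b is built row by row from the backward normal equations, and the product of the two triangular factors reproduces the inverse of R_x.

```python
import numpy as np

L = 4
r = 0.8 ** np.arange(L)  # assumed autocorrelation sequence (illustrative)
R = np.array([[r[abs(j - k)] for k in range(L)] for j in range(L)])

# Backward prediction error filter matrix T_b (unit-diagonal, lower triangular)
T_b = np.eye(L)
for i in range(1, L):
    T_b[i, :i] = -np.linalg.solve(R[:i, :i], R[:i, i])

P_b = np.diag(T_b @ R @ T_b.T)   # backward prediction error powers
M = np.diag(P_b ** -0.5) @ T_b   # diag{P_{b,i}}^{-1/2} T_b, lower triangular

# Eq. (2.41): R^{-1} = M^T M, a Cholesky-type factorization
print(np.allclose(np.linalg.inv(R), M.T @ M))  # True
```

Equivalently, M whitens the input exactly: M R M^T is the identity matrix, which is the matrix form of the Gram-Schmidt orthogonalization discussed next.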
2.6 Final Remarks on Linear Prediction
It should be noticed that a sufficiently long (high order) forward prediction error filter
transforms a (possibly) correlated signal into a white sequence of forward errors (the
sequence progresses with time index n ). On the other hand, the Gram-Schmidt
orthogonalization transforms the input vector x(n) into an equivalent vector e_b(n), whose components (associated with the order of the backward prediction error filter) are uncorrelated.

By comparing the results shown for forward and backward predictions, it can be seen that: (i) the forward and backward prediction error powers are the same; (ii) the coefficients of the optimum backward filter can be obtained by reversing the ones of the optimum forward filter. Based on these relations, the Levinson-Durbin algorithm
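The two forward-backward relations can be checked numerically for a fixed order. A minimal sketch, assuming a real, stationary signal with the illustrative autocorrelation r(k) = 0.8^|k| and order m = 3 (both choices are assumptions, not from the text):

```python
import numpy as np

m = 3                         # predictor order (illustrative)
r = 0.8 ** np.arange(m + 1)   # assumed autocorrelation sequence
R = np.array([[r[abs(j - k)] for k in range(m + 1)] for j in range(m + 1)])

# Forward predictor: estimate x(n) from x(n - 1), ..., x(n - m)
w_f = np.linalg.solve(R[1:, 1:], R[1:, 0])
P_f = R[0, 0] - w_f @ R[1:, 0]   # forward prediction error power

# Backward predictor: estimate x(n - m) from x(n), ..., x(n - m + 1)
w_b = np.linalg.solve(R[:m, :m], R[:m, m])
P_b = R[m, m] - w_b @ R[:m, m]   # backward prediction error power

print(np.allclose(P_f, P_b))        # (i) equal error powers -> True
print(np.allclose(w_b, w_f[::-1]))  # (ii) reversed coefficients -> True
```

Both relations rest on the symmetric Toeplitz structure of R; for complex-valued signals the reversal in (ii) would also involve conjugation.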