Digital Signal Processing Reference
The solution of this least-squares problem is expressed in the following way:

$$
\hat{\mathbf{a}} = -\left(\mathbf{X}^{H}\mathbf{X}\right)^{-1}\mathbf{X}^{H}\mathbf{x}
\quad \text{with} \quad
\hat{\mathbf{a}} = \begin{bmatrix} \hat{a}_{1} \\ \vdots \\ \hat{a}_{p} \end{bmatrix}
\qquad [6.7]
$$
Depending on the chosen minimization window, the matrix X and the vector x are defined in different ways. In the case where:
$$
\mathbf{X} = \mathbf{X}_{1} =
\begin{bmatrix}
x(p-1) & \cdots & x(0) \\
\vdots & & \vdots \\
x(N-2) & \cdots & x(N-p-1)
\end{bmatrix}
\qquad
\mathbf{x} = \mathbf{x}_{1} =
\begin{bmatrix}
x(p) \\ \vdots \\ x(N-1)
\end{bmatrix}
\qquad [6.8]
$$
this estimation method is improperly called the covariance method, because the elements of the matrix $\mathbf{X}_{1}^{H}\mathbf{X}_{1}$ and of the vector $\mathbf{X}_{1}^{H}\mathbf{x}_{1}$ are estimates of covariances to within a normalizing coefficient. Morf has given an order-recursive algorithm that makes it possible to calculate this solution without explicit inversion of the matrix [KAY 88, MOR 77].
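As a sketch of how [6.7] and [6.8] translate into code (this implementation and its function name are illustrative, not from the book), the covariance-method estimate can be computed by building $\mathbf{X}_1$ and $\mathbf{x}_1$ from the data and solving the least-squares problem directly. A numerical least-squares solver is used in place of the explicit matrix inversion of [6.7], which is equivalent but better conditioned:

```python
import numpy as np

def covariance_method(x, p):
    """Estimate AR(p) coefficients by the covariance method.

    Solves a_hat = -(X^H X)^{-1} X^H x  (equation [6.7]) with
    X = X_1 and x = x_1 built as in [6.8]: no windowing of the
    data, rows run over n = p .. N-1.
    """
    x = np.asarray(x)
    N = len(x)
    # Row for index n is [x(n-1), x(n-2), ..., x(n-p)]
    X1 = np.array([[x[n - k] for k in range(1, p + 1)] for n in range(p, N)])
    x1 = x[p:N]
    # Least-squares solve of X1 a = -x1 (equivalent to [6.7],
    # avoids forming and inverting X1^H X1 explicitly)
    a_hat, *_ = np.linalg.lstsq(X1, -x1, rcond=None)
    return a_hat
```

For noiseless data generated by an exact AR recursion, e.g. $x(n) = 0.9\,x(n-1)$, the method recovers the model coefficient $a_1 = -0.9$ exactly.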
By adopting a least-squares solution which minimizes the sum of the forward [6.6] and backward prediction errors, we choose:
$$
\mathbf{X} = \mathbf{X}_{2} =
\begin{bmatrix}
x^{*}(N-p) & \cdots & x^{*}(N-1) \\
\vdots & & \vdots \\
x^{*}(1) & \cdots & x^{*}(p) \\
x(p-1) & \cdots & x(0) \\
\vdots & & \vdots \\
x(N-2) & \cdots & x(N-p-1)
\end{bmatrix}
\qquad
\mathbf{x} = \mathbf{x}_{2} =
\begin{bmatrix}
x^{*}(N-p-1) \\ \vdots \\ x^{*}(0) \\ x(p) \\ \vdots \\ x(N-1)
\end{bmatrix}
\qquad [6.9]
$$
This is the modified covariance method, which is generally more efficient than the covariance method. It is sometimes also called the maximum entropy method