$$\mathbf{Q}^T \mathbf{R}_x \mathbf{Q} = \boldsymbol{\Lambda}, \qquad (2.22)$$

with $\boldsymbol{\Lambda}$ being a diagonal matrix determined by the eigenvalues $\lambda_0, \lambda_1, \ldots, \lambda_{L-1}$ of $\mathbf{R}_x$, and $\mathbf{Q}$ a (unitary) matrix that has the associated eigenvectors $\mathbf{q}_0, \mathbf{q}_1, \ldots, \mathbf{q}_{L-1}$ as its columns [2]. Let's define the misalignment vector (or weight error vector)

$$\tilde{\mathbf{w}} = \mathbf{w}_{\mathrm{opt}} - \mathbf{w}, \qquad (2.23)$$
and its transformed version

$$\mathbf{u} = \mathbf{Q}^T \tilde{\mathbf{w}}. \qquad (2.24)$$
Using (2.20), (2.16), (2.23), (2.22), and (2.24) in (2.19) results in

$$J_{\mathrm{MSE}}(\mathbf{w}) = J_{\mathrm{MMSE}} + \mathbf{u}^T \boldsymbol{\Lambda} \mathbf{u}. \qquad (2.25)$$
This is called the canonical form of the quadratic form $J_{\mathrm{MSE}}(\mathbf{w})$, and it contains no cross-product terms. Since the eigenvalues are non-negative, it is clear that the surface describes an elliptic hyperparaboloid, with the eigenvectors being the principal axes of the hyperellipses of constant MSE value.
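As a numerical check of (2.25), the following sketch forms a Wiener solution, eigendecomposes $\mathbf{R}_x$ as in (2.22), and compares $J_{\mathrm{MSE}}(\mathbf{w})$ against its canonical form. The statistics `R_x`, `p`, and `sigma_d2` are illustrative assumptions, not values from the text:

```python
import numpy as np

# Sketch (illustrative values, not from the text): verify the canonical form
# J_MSE(w) = J_MMSE + u^T Lambda u, with u = Q^T (w_opt - w).
rng = np.random.default_rng(0)

L = 4
A = rng.standard_normal((L, L))
R_x = A @ A.T + L * np.eye(L)       # assumed symmetric positive-definite autocorrelation
p = rng.standard_normal(L)          # assumed cross-correlation vector
sigma_d2 = 5.0                      # assumed reference-signal variance

w_opt = np.linalg.solve(R_x, p)     # Wiener solution: R_x w_opt = p
J_MMSE = sigma_d2 - p @ w_opt       # minimum MSE

# Eigendecomposition R_x = Q Lambda Q^T  (eq. 2.22)
lam, Q = np.linalg.eigh(R_x)

def J_MSE(w):
    # Quadratic MSE surface: sigma_d^2 - 2 p^T w + w^T R_x w
    return sigma_d2 - 2 * p @ w + w @ R_x @ w

w = rng.standard_normal(L)          # arbitrary filter
u = Q.T @ (w_opt - w)               # transformed misalignment (eq. 2.24)
J_canonical = J_MMSE + u @ (lam * u)  # J_MMSE + u^T Lambda u (eq. 2.25)

print(np.isclose(J_MSE(w), J_canonical))  # True: both forms agree
```

Because $\mathbf{R}_x$ is positive definite here, all entries of `lam` are positive, so the surface is indeed an elliptic hyperparaboloid with minimum $J_{\mathrm{MMSE}}$ at $\mathbf{u} = \mathbf{0}$.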
2.5 Example: Linear Prediction
In the filtering problem studied in this chapter, we use the $L$ most recent samples $x(n), x(n-1), \ldots, x(n-L+1)$ and estimate the value of the reference signal at time $n$. The idea behind a forward linear prediction is to use a certain set of samples $x(n-1), x(n-2), \ldots$ to estimate (with a linear combination) the value $x(n+k)$ for $k \geq 0$. On the other hand, in a backward linear prediction (also known as smoothing), the set of samples $x(n), x(n-1), \ldots, x(n-M+1)$ is used to linearly estimate the value $x(n-k)$ for $k \geq M$.
2.5.1 Forward Linear Prediction
Firstly, we explore the forward prediction case of estimating $x(n)$ based on the previous $L$ samples. Since $\mathbf{x}(n-1) = [x(n-1), x(n-2), \ldots, x(n-L)]^T$, using a transversal filter $\mathbf{w}_L$, the forward linear prediction error can be put as

$$e_{f,L}(n) = x(n) - \sum_{j=1}^{L} w_j\, x(n-j) = x(n) - \mathbf{w}_L^T \mathbf{x}(n-1). \qquad (2.26)$$
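A minimal sketch of (2.26), under assumed values: the signal is a toy AR(2) process (coefficients chosen for illustration, not taken from the text). When the predictor coefficients match the model, the forward prediction error reduces to the driving noise $v(n)$:

```python
import numpy as np

# Sketch (assumption, not the book's code): forward linear prediction error
# e_f,L(n) = x(n) - w_L^T x(n-1), eq. (2.26), for a length-L transversal filter.
rng = np.random.default_rng(1)

L = 2
N = 5000

# Illustrative AR(2) signal: x(n) = 0.75 x(n-1) - 0.5 x(n-2) + v(n)
x = np.zeros(N)
v = rng.standard_normal(N)
for n in range(2, N):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + v[n]

def forward_error(x, w, n):
    # e_f,L(n) = x(n) - sum_{j=1}^{L} w_j x(n-j)
    L = len(w)
    past = x[n - L:n][::-1]        # x(n-1), x(n-2), ..., x(n-L)
    return x[n] - w @ past

# Use the true AR coefficients as the transversal filter w_L.
w = np.array([0.75, -0.5])
e = np.array([forward_error(x, w, n) for n in range(L, N)])
print(np.allclose(e, v[L:]))       # True: the error equals the driving noise
```

Any mismatch between `w` and the model coefficients leaves a residual correlated with the past samples, which is what the optimal (Wiener) forward predictor drives to zero.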