Digital Signal Processing Reference
where

$$
Q \;=\; E\!\left[\frac{u_i^{*}\,u_i}{g^{2}[u_i]}\right] \qquad (1.25)
$$
and the notation $\mathrm{vec}^{-1}(F^{k}\sigma)$ recovers the weighting matrix that corresponds to the vector $F^{k}\sigma$.
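For concreteness, $\mathrm{vec}(\cdot)$ stacks the columns of a matrix into a vector, and $\mathrm{vec}^{-1}(\cdot)$ undoes the stacking. A minimal NumPy sketch, using an arbitrary example matrix (not from the text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = A.flatten(order="F")        # vec(A): stack columns -> [1, 3, 2, 4]
B = v.reshape(2, 2, order="F")  # vec^{-1}(v) recovers the matrix A
```

The `order="F"` (column-major) flag matches the column-stacking convention of $\mathrm{vec}(\cdot)$.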
When $W = I$, the evolution of the top entry of $\mathcal{W}_i$ in Equation 1.22 describes the mean-square deviation of the filter, i.e., $E\|\tilde{w}_i\|^2$. If, on the other hand, $W$ is chosen as $W = R_u$, the evolution of the top entry of $\mathcal{W}_i$ describes the excess mean-square error (or learning curve) of the filter, i.e., $E|e_a(i)|^2 = E\|\tilde{w}_{i-1}\|^2_{R_u}$.

The learning curve can also be characterized more explicitly as follows. Let $r = \mathrm{vec}(R_u)$ and choose $\sigma = r$. Iterating Equation 1.21 we find that
$$
E\|\tilde{w}_i\|^2_r \;=\; \|\tilde{w}_{-1}\|^2_{F^{i+1} r} \;+\; \mu^2 \sigma_v^2\, E\!\left[\frac{\|u_i\|^2_{(I + F + \cdots + F^{i})\,r}}{g^{2}[u_i]}\right]
$$
that is,

$$
E\|\tilde{w}_i\|^2_r \;=\; \|\tilde{w}_{-1}\|^2_{a_i} \;+\; \mu^2 \sigma_v^2\, b(i),
$$
where the vector $a_i$ and the scalar $b(i)$ satisfy the recursions

$$
a_i = F a_{i-1}, \qquad a_{-1} = r,
$$
$$
b(i) = b(i-1) + E\!\left[\frac{\|u_i\|^2_{a_{i-1}}}{g^{2}[u_i]}\right], \qquad b(-1) = 0.
$$
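As a numerical illustration, the recursions for $a_i$ and $b(i)$ can be iterated directly once $F$ and $Q$ are available; note that $E[\|u_i\|^2_{a_{i-1}}/g^2[u_i]] = \mathrm{Tr}(Q\,\mathrm{vec}^{-1}(a_{i-1}))$ by the definition of $Q$ in Equation 1.25. The sketch below is hypothetical: the values of $F$, $Q$, $R_u$, $\mu$, $\sigma_v^2$, and $w^o$ are placeholders chosen only so that $F$ is stable, not quantities from the text.

```python
import numpy as np

# Hypothetical placeholder data (not from the text): an M-tap filter.
M = 3
Ru = np.diag([1.0, 0.5, 0.25])        # assumed regressor covariance R_u
F = 0.9 * np.eye(M * M)               # assumed stable F (all eigenvalues at 0.9)
Q = Ru.copy()                         # assumed Q = E[u_i^* u_i / g^2[u_i]]
mu, sigma_v2 = 0.1, 1e-3              # step size and noise variance
w_o = np.ones(M)                      # unknown system w^o (with w_{-1} = 0)

r = Ru.flatten(order="F")             # r = vec(R_u), column-major stacking
unvec = lambda v: v.reshape(M, M, order="F")  # vec^{-1}(.)

# a_i = F a_{i-1}, a_{-1} = r;   b(i) = b(i-1) + Tr(Q vec^{-1}(a_{i-1})), b(-1) = 0
a, b = r.copy(), 0.0
curve = []                            # E||w~_i||^2_r for i = 0, 1, ...
for i in range(50):
    a_prev = a.copy()                 # a_{i-1}
    a = F @ a                         # a_i
    b = b + np.trace(Q @ unvec(a_prev))   # b(i)
    curve.append(w_o @ unvec(a) @ w_o + mu**2 * sigma_v2 * b)
```

With a stable $F$ the weighted term $\|w^o\|^2_{a_i}$ decays geometrically, and the curve settles at the steady-state level set by the accumulated noise term.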
Usually $w_{-1} = 0$, so that $\tilde{w}_{-1} = w^o$. Using the definitions for $\{a_i, b(i)\}$, it is easy to verify that
$$
E|e_a(i)|^2 \;=\; E|e_a(i-1)|^2 \;+\; \|w^o\|^2_{F^{i-1}(F - I)\,r} \;+\; \mu^2 \sigma_v^2\, \mathrm{Tr}\!\left(Q\,\mathrm{vec}^{-1}(F^{i-1} r)\right), \qquad (1.26)
$$
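Since Equation 1.26 follows algebraically from the $\{a_i, b(i)\}$ recursions, it can be checked numerically: the recursive step must agree with a direct evaluation of $E|e_a(i)|^2 = \|w^o\|^2_{F^{i} r} + \mu^2\sigma_v^2 \sum_{k=0}^{i-1}\mathrm{Tr}(Q\,\mathrm{vec}^{-1}(F^{k} r))$. The sketch below uses hypothetical placeholder matrices (none of the numbers come from the text):

```python
import numpy as np

# Hypothetical placeholder data (not from the text).
M = 2
Ru = np.diag([1.0, 0.5])
F = 0.8 * np.eye(M * M)               # assumed stable F
Q = Ru.copy()                         # assumed Q = E[u_i^* u_i / g^2[u_i]]
mu, sv2 = 0.1, 1e-3
w_o = np.array([1.0, -1.0])

r = Ru.flatten(order="F")
unvec = lambda v: v.reshape(M, M, order="F")
Fp = np.linalg.matrix_power           # F^k

def emse(i):
    """Direct form: ||w^o||^2_{F^i r} + mu^2 sv2 * sum_{k<i} Tr(Q vec^-1(F^k r))."""
    direct = w_o @ unvec(Fp(F, i) @ r) @ w_o
    noise = mu**2 * sv2 * sum(np.trace(Q @ unvec(Fp(F, k) @ r)) for k in range(i))
    return direct + noise

def emse_recursive(i):
    """Recursive step of Equation 1.26, valid for i >= 1."""
    step = (w_o @ unvec(Fp(F, i - 1) @ ((F - np.eye(M * M)) @ r)) @ w_o
            + mu**2 * sv2 * np.trace(Q @ unvec(Fp(F, i - 1) @ r)))
    return emse(i - 1) + step

ok = all(np.isclose(emse(i), emse_recursive(i)) for i in range(1, 20))
```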
which describes the learning curve of data-normalized adaptive filters of the form in
Equation 1.3. Further discussions on the learning behavior of adaptive filters
can be found in Reference 17.
1.6 Mean-Square Stability
Recursion 1.22 shows that the adaptive filter will be mean-square stable if, and only if, the matrix $\mathcal{F}$ is a stable matrix, i.e., all its eigenvalues lie inside the unit circle. But since $\mathcal{F}$ has the form of a companion matrix, its eigenvalues coincide with the roots of its characteristic polynomial $p(x)$, which in turn coincide with the eigenvalues of $F$. Therefore, the mean-square stability of the adaptive filter requires the matrix $F$ in Equation 1.19 to be a stable matrix.
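The stability condition itself is easy to test numerically: compute the eigenvalues and check that the spectral radius is below one. A minimal sketch with hypothetical example matrices (placeholders, not tied to any particular filter):

```python
import numpy as np

def is_mean_square_stable(F: np.ndarray) -> bool:
    """True iff every eigenvalue of F lies strictly inside the unit circle."""
    return bool(np.max(np.abs(np.linalg.eigvals(F))) < 1.0)

# Hypothetical examples: a contraction is stable; one eigenvalue outside
# the unit circle makes the corresponding filter mean-square unstable.
F_stable = 0.9 * np.eye(4)
F_unstable = np.diag([0.5, 1.1, 0.3, 0.2])
```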