where $F$ is $M^2 \times M^2$ and given by

$$F \;=\; I - \mu A + \mu^{2} B, \qquad (1.19)$$
in terms of the symmetric matrices $\{A, B\}$,

$$A \;=\; (P \otimes I_M) + (I_M \otimes P), \qquad
B \;=\; E\!\left[\frac{u_i^{*} u_i \otimes u_i^{*} u_i}{g^{2}[u_i]}\right], \qquad
P \;=\; E\!\left[\frac{u_i^{*} u_i}{g[u_i]}\right]. \qquad (1.20)$$
Actually, $A$ is positive-definite (because $P$ is) and $B$ is nonnegative-definite.
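As a rough numerical illustration of Equations 1.19 and 1.20 (not part of the original development), the sketch below estimates the moments $P$ and $B$ by sample averaging and then forms $F$. The real Gaussian regressor model, the covariance $R_u$, the step size, and the normalization $g[u] = \epsilon + \|u\|^2$ are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch (assumptions, not from the text): real Gaussian row regressors
# with covariance R_u, and an NLMS-type normalization g[u] = eps + ||u||^2.
rng = np.random.default_rng(0)
M, mu, eps, n_samples = 4, 0.05, 1e-3, 50_000

R_u = np.diag(1.0 + np.arange(M))     # illustrative regressor covariance
R_sqrt = np.sqrt(R_u)                 # valid square root because R_u is diagonal

# Sample-average estimates of P = E[u*u/g[u]] and B = E[(u*u (x) u*u)/g^2[u]].
P = np.zeros((M, M))
B = np.zeros((M * M, M * M))
for _ in range(n_samples):
    u = rng.standard_normal(M) @ R_sqrt   # one row regressor u_i
    g = eps + u @ u                       # assumed normalization g[u_i]
    uu = np.outer(u, u)                   # u_i^* u_i (real case)
    P += uu / g
    B += np.kron(uu, uu) / g ** 2
P /= n_samples
B /= n_samples

I_M = np.eye(M)
A = np.kron(P, I_M) + np.kron(I_M, P)       # Equation 1.20
F = np.eye(M * M) - mu * A + mu ** 2 * B    # Equation 1.19

# Consistent with the remark above: A positive-definite, B nonnegative-definite
# (up to sampling error).
print(np.linalg.eigvalsh(A).min(), np.linalg.eigvalsh(B).min())
```

With $g[u] \equiv 1$ the same sketch covers the unnormalized case, in which $P$ reduces to $R_u$.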
Using the column notation $\sigma = \mathrm{vec}(\Sigma)$ and the relation $\sigma' = F\sigma$, we can write Equations 1.16 through 1.17 as

$$E\,\|\widetilde{w}_i\|^{2}_{\mathrm{vec}^{-1}(\sigma)} \;=\; E\,\|\widetilde{w}_{i-1}\|^{2}_{\mathrm{vec}^{-1}(F\sigma)} \;+\; \mu^{2}\sigma_v^{2}\, E\!\left[\frac{\|u_i\|^{2}_{\sigma}}{g^{2}[u_i]}\right],$$
which we shall rewrite more succinctly, by dropping the $\mathrm{vec}^{-1}(\cdot)$ notation and keeping the weighting vectors, as
$$E\,\|\widetilde{w}_i\|^{2}_{\sigma} \;=\; E\,\|\widetilde{w}_{i-1}\|^{2}_{F\sigma} \;+\; \mu^{2}\sigma_v^{2}\, E\!\left[\frac{\|u_i\|^{2}_{\sigma}}{g^{2}[u_i]}\right]. \qquad (1.21)$$
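Continuing the previous sketch (same assumed quantities `M`, `F`, `R_sqrt`, `eps`, `mu`, `rng`, `n_samples`), the snippet below shows how each term of Equation 1.21 can be evaluated numerically: the weighted norm uses $\mathrm{vec}^{-1}(\sigma)$, the driving term reduces to an inner product of $\sigma$ with $\mathrm{vec}(E[u_i^{*} u_i/g^{2}[u_i]])$, and the weighting vector needed on the right-hand side is simply $F\sigma$. The noise variance $\sigma_v^2$ is an assumed value.

```python
def vec(X):
    """Stack the columns of X into a vector (column-major vec(.))."""
    return X.reshape(-1, order="F")

def vec_inv(x, M):
    """Inverse of vec: reshape a length-M^2 vector into an M x M matrix."""
    return x.reshape(M, M, order="F")

def weighted_sq_norm(x, sigma, M):
    """||x||^2_sigma = x^T vec^{-1}(sigma) x, the weighted norm of Eq. 1.21."""
    return float(x @ vec_inv(sigma, M) @ x)

# Driving term of Eq. 1.21: E[||u_i||^2_sigma / g^2[u_i]] = vec(S)^T sigma,
# with S = E[u_i^* u_i / g^2[u_i]] estimated by sample averaging as before.
S = np.zeros((M, M))
for _ in range(n_samples):
    u = rng.standard_normal(M) @ R_sqrt
    g = eps + u @ u
    S += np.outer(u, u) / g ** 2
S /= n_samples
gamma = vec(S)

sigma_v2 = 1e-3                      # assumed noise variance sigma_v^2
sigma = vec(np.eye(M))               # Sigma = I (this choice gives the MSD below)
drive = mu ** 2 * sigma_v2 * (gamma @ sigma)   # last term of Eq. 1.21
sigma_next = F @ sigma               # weighting vector for E||w~_{i-1}||^2_{F sigma}
```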
Now, as mentioned earlier, in transient analysis we are interested in the evolution of $E\,\|\widetilde{w}_i\|^{2}$ and $E\,\|\widetilde{w}_i\|^{2}_{R_u}$; the former quantity is the filter mean-square deviation while the second quantity relates to the filter mean-square error (or learning) curve because

$$E\,e^{2}(i) \;=\; E\,e_a^{2}(i) + \sigma_v^{2} \;=\; E\,\|\widetilde{w}_{i-1}\|^{2}_{R_u} + \sigma_v^{2}.$$
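As a sanity check on this decomposition, the following self-contained sketch averages an ensemble of data-normalized LMS-type recursions, $w_i = w_{i-1} + \mu\, u_i^{*} e(i)/g[u_i]$, and compares the ensemble average of $e^{2}(i)$ with $E\,\|\widetilde{w}_{i-1}\|^{2}_{R_u} + \sigma_v^{2}$. The update form, the data model $d(i) = u_i w^{o} + v(i)$, and all parameter values are assumptions made for the illustration, and the agreement is only approximate (it rests on the usual independence assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
M, mu, eps, sigma_v2 = 4, 0.05, 1e-3, 1e-3
n_runs, n_iter = 500, 200

R_u = np.diag(1.0 + np.arange(M))
R_sqrt = np.sqrt(R_u)
w_opt = rng.standard_normal(M) / np.sqrt(M)   # assumed unknown system w^o

mse = np.zeros(n_iter)     # ensemble average of e^2(i)
rhs = np.zeros(n_iter)     # ensemble average of ||w~_{i-1}||^2_{R_u}

for _ in range(n_runs):
    w = np.zeros(M)                               # w_{-1} = 0
    for i in range(n_iter):
        u = rng.standard_normal(M) @ R_sqrt       # row regressor u_i
        d = u @ w_opt + np.sqrt(sigma_v2) * rng.standard_normal()
        e = d - u @ w                             # output error e(i)
        w_err = w_opt - w                         # weight-error vector w~_{i-1}
        mse[i] += e ** 2
        rhs[i] += w_err @ R_u @ w_err             # ||w~_{i-1}||^2_{R_u}
        g = eps + u @ u                           # assumed normalization g[u_i]
        w = w + mu * u * e / g                    # data-normalized update (assumed form)

mse /= n_runs
rhs = rhs / n_runs + sigma_v2                     # E||w~_{i-1}||^2_{R_u} + sigma_v^2

print("ensemble E e^2(i), late iterations:        ", mse[-20:].mean())
print("E||w~_{i-1}||^2_{R_u} + sigma_v^2, same i: ", rhs[-20:].mean())
```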
The quantities $\{E\,\|\widetilde{w}_i\|^{2},\, E\,\|\widetilde{w}_i\|^{2}_{R_u}\}$ are in turn special cases of $E\,\|\widetilde{w}_i\|^{2}_{\Sigma}$ obtained by choosing $\Sigma = I$ or $\Sigma = R_u$. Therefore, in the sequel, we focus on studying the evolution of $E\,\|\widetilde{w}_i\|^{2}_{\Sigma}$.
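Anticipating the weighting-vector iteration described next, here is a hedged sketch (reusing `F`, `gamma`, `vec`, `mu`, `sigma_v2`, `R_u`, and `w_opt` from the earlier snippets) that traces the theoretical evolution of $E\,\|\widetilde{w}_i\|^{2}_{\Sigma}$ by repeatedly applying Equation 1.21, assuming the filter starts at $w_{-1} = 0$ so that $\widetilde{w}_{-1} = w^{o}$.

```python
n_iter = 200
w0_kron = np.kron(w_opt, w_opt)   # vec(w^o w^o^T); gives ||w~_{-1}||^2_q = w0_kron @ q

def theoretical_curve(sigma0, n_iter):
    """E||w~_i||^2_{sigma0}, i = 0..n_iter-1, by iterating Eq. 1.21 with sigma <- F sigma."""
    curve = np.zeros(n_iter)
    q = sigma0.astype(float).copy()       # q = F^j sigma0, starting at j = 0
    acc = 0.0                             # running sum of gamma^T (F^j sigma0)
    for i in range(n_iter):
        acc += gamma @ q
        q = F @ q                         # advance to F^{i+1} sigma0
        curve[i] = w0_kron @ q + mu ** 2 * sigma_v2 * acc
    return curve

msd = theoretical_curve(vec(np.eye(M)), n_iter)   # Sigma = I:   mean-square deviation
emse = theoretical_curve(vec(R_u), n_iter)        # Sigma = R_u: excess mean-square error
mse_theory = emse + sigma_v2                      # E e^2(i+1) = E||w~_i||^2_{R_u} + sigma_v^2
```

The curves produced this way can be compared against the ensemble averages from the previous sketch.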
From Equation 1.21 we see that to evaluate $E\,\|\widetilde{w}_i\|^{2}_{\sigma}$ for arbitrary $\sigma$, we need $E\,\|\widetilde{w}_{i-1}\|^{2}$ with weighting vector $F\sigma$. This term can be deduced from Equation 1.21 by writing it for $\sigma \leftarrow F\sigma$, i.e.,
$$E\,\|\widetilde{w}_i\|^{2}_{F\sigma} \;=\; E\,\|\widetilde{w}_{i-1}\|^{2}_{F^{2}\sigma} \;+\; \mu^{2}\sigma_v^{2}\, E\!\left[\frac{\|u_i\|^{2}_{F\sigma}}{g^{2}[u_i]}\right],$$

with the weighted term $E\,\|\widetilde{w}_{i-1}\|^{2}_{F^{2}\sigma}$. This term can in turn be deduced from Equation 1.21 by writing it for $\sigma \leftarrow F^{2}\sigma$. Continuing in this fashion, for