\[
\mathrm{E}[\tilde{c}(n)] = \mathrm{E}[A(n,0)]\,\tilde{c}(-1) + (1-\alpha)\sum_{j=0}^{n}\mathrm{E}[A(n,j+1)]\,c_T. \tag{4.60}
\]
At this point we can use the independence assumption on the input regressors. It is
then clear that
\[
\mathrm{E}[A(n,j+1)] = \mathrm{E}\!\left[\prod_{i=j+1}^{n}\bigl(\alpha I_L - f(x(i))\,\tilde{x}(i)\tilde{x}^{T}(i)\bigr)\right] = B_x^{\,n-j}, \tag{4.61}
\]
where in the first equality the order of the matrix products is the same as in (4.58).
The matrix B_x = E[α I_L − f(x(j)) x̃(j) x̃^T(j)] is given by

\[
B_x = \alpha I_L - \mu^{1/2} C_x\,\mu^{1/2}, \qquad \text{with } C_x = \mathrm{E}\bigl[f(x(n))\,x(n)x^{T}(n)\bigr]. \tag{4.62}
\]
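To make (4.62) concrete, here is a minimal numerical sketch that estimates C_x by a sample average and forms B_x. It assumes, purely for illustration, f(·) ≡ 1 (the LMS-type case), white unit-variance Gaussian regressors, and a diagonal step-size matrix μ; none of these choices are fixed by the text above.

import numpy as np

rng = np.random.default_rng(0)
L, alpha = 4, 0.999                        # filter length and alpha (illustrative values)
mu = np.diag(rng.uniform(0.01, 0.05, L))   # assumed diagonal step-size matrix
mu_sqrt = np.sqrt(mu)                      # mu^{1/2}

# Sample-average estimate of C_x = E[f(x(n)) x(n) x^T(n)], with f(.) = 1 here
N = 200_000
X = rng.standard_normal((N, L))
C_x = (X.T @ X) / N                        # close to I_L for white unit-variance input

# Equation (4.62): B_x = alpha I_L - mu^{1/2} C_x mu^{1/2}
B_x = alpha * np.eye(L) - mu_sqrt @ C_x @ mu_sqrt
print(np.abs(np.linalg.eigvals(B_x)))      # all moduli < 1 here, cf. Lemma 4.2 below

For this toy setup the eigenvalue moduli are approximately α − μ_i, so they stay inside the unit circle for small positive step sizes.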
Therefore, we can write (4.60) as:

\[
\mathrm{E}[\tilde{c}(n)] = B_x^{\,n+1}\,\tilde{c}(-1) + (1-\alpha)\sum_{j=0}^{n}B_x^{\,n-j}\,c_T. \tag{4.63}
\]
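Equation (4.63) is just the unrolled form of the one-step mean recursion E[c̃(n)] = B_x E[c̃(n − 1)] + (1 − α) c_T started from c̃(−1). The short Python check below, using an arbitrary stable toy B_x rather than one derived from a specific input model, confirms that the two forms agree.

import numpy as np

rng = np.random.default_rng(1)
L, alpha, n = 4, 0.95, 50
B_x = 0.8 * np.eye(L) + rng.uniform(-0.1, 0.1, (L, L))  # stable toy B_x (illustrative)
c_init = rng.standard_normal(L)                          # plays the role of c~(-1)
c_T = rng.standard_normal(L)

# Unrolled form (4.63): B_x^{n+1} c~(-1) + (1 - alpha) sum_{j=0}^{n} B_x^{n-j} c_T
closed = np.linalg.matrix_power(B_x, n + 1) @ c_init
closed += (1 - alpha) * sum(
    np.linalg.matrix_power(B_x, n - j) @ c_T for j in range(n + 1)
)

# One-step mean recursion iterated from c~(-1) up to time n
c = c_init.copy()
for _ in range(n + 1):
    c = B_x @ c + (1 - alpha) * c_T

print(np.allclose(closed, c))                            # True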
This is a matrix equation, and we will assume that B_x is diagonalizable.⁹ We then have the following lemma:
Lemma 4.2 Equation (4.63) will be stable (i.e., lim_{n→∞} E[c̃(n)] < ∞) for every choice of c̃(−1) and c_T if and only if |eig_i[B_x]| < 1, i = 1, …, L.¹⁰
Proof As B_x is diagonalizable, there exists an invertible matrix P such that B_x = P Λ P^{−1}, where Λ = diag(λ_1, …, λ_L) and λ_i denotes the i-th eigenvalue of B_x. It is easy to show that B_x^n = P Λ^n P^{−1}. Now, we can write equation (4.63) as:
\[
\begin{aligned}
\mathrm{E}[\tilde{c}(n)] &= P\Lambda^{n+1}P^{-1}\,\tilde{c}(-1) + (1-\alpha)\sum_{j=0}^{n}P\Lambda^{n-j}P^{-1}\,c_T\\
&= P\Lambda^{n+1}P^{-1}\,\tilde{c}(-1) + (1-\alpha)\,P\Biggl(\sum_{j=0}^{n}\Lambda^{n-j}\Biggr)P^{-1}\,c_T\\
&= P\Lambda^{n+1}P^{-1}\,\tilde{c}(-1) + (1-\alpha)\,P\Biggl(\sum_{j=0}^{n}\Lambda^{j}\Biggr)P^{-1}\,c_T.
\end{aligned}
\tag{4.64}
\]
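Under the eigenvalue condition of Lemma 4.2, Λ^{n+1} → 0 and the geometric sum Σ_{j=0}^{n} Λ^j → (I − Λ)^{−1} as n → ∞, so (4.64) converges to (1 − α)(I − B_x)^{−1} c_T for every c̃(−1). The sketch below verifies this limit numerically on a toy diagonalizable B_x (all values illustrative).

import numpy as np

rng = np.random.default_rng(2)
L, alpha = 4, 0.95
B_x = 0.8 * np.eye(L) + rng.uniform(-0.1, 0.1, (L, L))  # toy B_x (illustrative)
c_T = rng.standard_normal(L)

lam, P = np.linalg.eig(B_x)              # B_x = P diag(lam) P^{-1}
assert np.all(np.abs(lam) < 1)           # the condition of Lemma 4.2

# Limit of (4.64): Lambda^{n+1} -> 0 and sum_j Lambda^j -> (I - Lambda)^{-1},
# hence E[c~(n)] -> (1 - alpha) (I - B_x)^{-1} c_T
steady = (1 - alpha) * np.linalg.solve(np.eye(L) - B_x, c_T)

c = rng.standard_normal(L)               # arbitrary c~(-1)
for _ in range(5000):
    c = B_x @ c + (1 - alpha) * c_T
print(np.allclose(c, steady))            # True under the eigenvalue condition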
⁹ If B_x is not diagonalizable, we can always find a Jordan decomposition [35] for it, and the result of Lemma 4.2 remains valid [24].
¹⁰ We use eig_i[A] to denote the i-th eigenvalue of the matrix A.