Digital Signal Processing Reference
By computing the instantaneous gradient terms of (2.17) with respect to $S_D^*(i)$, $\bar{w}^*(i)$, and $\varepsilon(i)$, we obtain

$$\nabla L_{MV,S_D^*(i)} = x^*(i)\, r(i)\, \bar{w}^H(i) + \varepsilon^2(i)\, S_D(i)\, \bar{w}(i)\, \bar{w}^H(i) + 2\lambda^*\, a_p(\theta_k)\, \bar{w}^H(i),$$

$$\nabla L_{MV,\bar{w}^*(i)} = x^*(i)\, S_D^H(i)\, r(i) + \varepsilon^2(i)\, S_D^H(i)\, S_D(i)\, \bar{w}(i) + 2\lambda^*\, S_D^H(i)\, a_p(\theta_k),$$

$$\nabla L_{MV,\varepsilon(i)} = 2\,\varepsilon(i)\, \bar{w}^H(i)\, S_D^H(i)\, S_D(i)\, \bar{w}(i). \qquad (2.20)$$
By introducing the positive step sizes $\mu_s$, $\mu_w$, and $\mu_\varepsilon$, using the gradient rules $S_D(i+1) = S_D(i) - \mu_s \nabla L_{MV,S_D^*(i)}$, $\bar{w}(i+1) = \bar{w}(i) - \mu_w \nabla L_{MV,\bar{w}^*(i)}$, and $\varepsilon(i+1) = \varepsilon(i) - \mu_\varepsilon \nabla L_{MV,\varepsilon(i)}$, enforcing the constraint, and solving the resulting equations, we obtain
$$S_D(i+1) = S_D(i) - \mu_s \Big[ x^*(i)\, r(i)\, \bar{w}^H(i) + \varepsilon^2(i)\, S_D(i)\, \bar{w}(i)\, \bar{w}^H(i) - a_p(\theta_k) \big( a_p^H(\theta_k)\, a_p(\theta_k) \big)^{-1} \big( x^*(i)\, a_p^H(\theta_k)\, r(i) + \varepsilon^2(i)\, a_p^H(\theta_k)\, S_D(i)\, \bar{w}(i) \big) \bar{w}^H(i) \Big], \qquad (2.21)$$
$$\bar{w}(i+1) = \bar{w}(i) - \mu_w \Big[ x^*(i)\, S_D^H(i)\, r(i) + \varepsilon^2(i)\, S_D^H(i)\, S_D(i)\, \bar{w}(i) - S_D^H(i)\, a_p(\theta_k) \big( a_p^H(\theta_k)\, S_D(i)\, S_D^H(i)\, a_p(\theta_k) \big)^{-1} \big( x^*(i)\, a_p^H(\theta_k)\, S_D(i)\, S_D^H(i)\, r(i) + \varepsilon^2(i)\, a_p^H(\theta_k)\, S_D(i)\, S_D^H(i)\, S_D(i)\, \bar{w}(i) \big) \Big], \qquad (2.22)$$
$$\varepsilon(i+1) = \varepsilon(i) - 2\,\mu_\varepsilon\, \varepsilon(i)\, \bar{w}^H(i)\, S_D^H(i)\, S_D(i)\, \bar{w}(i), \qquad (2.23)$$
where $x(i) = \bar{w}^H(i)\, S_D^H(i)\, r(i)$. The RJIO scheme trades off a full-rank beamformer against one rank-reduction matrix $S_D(i)$, one reduced-rank beamformer $\bar{w}(i)$, and one adaptive loading recursion, operating in an alternating fashion and exchanging information.
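As a concrete illustration of the alternating recursion in (2.21)–(2.23), the following NumPy sketch performs one stochastic-gradient RJIO step. The function name, dimensions, and step-size values are illustrative assumptions, not from the text; the updates themselves follow the equations above, with the bracketed correction terms enforcing the constraint $\bar{w}^H(i) S_D^H(i) a_p(\theta_k) = 1$.

```python
import numpy as np

def rjio_sg_step(S_D, w, eps, r, a_p, mu_s, mu_w, mu_e):
    """One stochastic-gradient RJIO iteration, following (2.21)-(2.23).

    S_D : (M, D) rank-reduction matrix      w   : (D,) reduced-rank beamformer
    eps : adaptive loading scalar           r   : (M,) received snapshot
    a_p : (M,) presumed steering vector a_p(theta_k)
    (All names and shapes are illustrative assumptions.)
    """
    e2 = eps ** 2
    x = w.conj() @ (S_D.conj().T @ r)                 # output x(i) = w^H S_D^H r(i)

    # (2.21): update S_D; the correction term projects onto the constraint
    apH_r = a_p.conj() @ r                            # a_p^H r(i)
    apH_Sw = a_p.conj() @ (S_D @ w)                   # a_p^H S_D w
    grad_S = np.conj(x) * np.outer(r, w.conj()) + e2 * np.outer(S_D @ w, w.conj())
    corr_S = ((np.conj(x) * apH_r + e2 * apH_Sw) / (a_p.conj() @ a_p)) \
             * np.outer(a_p, w.conj())
    S_new = S_D - mu_s * (grad_S - corr_S)

    # (2.22): update the reduced-rank beamformer
    ab = S_D.conj().T @ a_p                           # reduced-rank steering S_D^H a_p
    grad_w = np.conj(x) * (S_D.conj().T @ r) + e2 * (S_D.conj().T @ (S_D @ w))
    corr_w = ab * ((ab.conj() @ grad_w) / (ab.conj() @ ab))
    w_new = w - mu_w * (grad_w - corr_w)

    # (2.23): update the adaptive loading level
    eps_new = eps - 2.0 * mu_e * eps * np.real(w.conj() @ (S_D.conj().T @ (S_D @ w)))
    return S_new, w_new, eps_new
```

A useful sanity check on the projection terms: each update leaves the constrained quantity $a_p^H(\theta_k) S_D(i) \bar{w}(i)$ unchanged, which is what "enforcing the constraint" buys at every iteration.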
2.5.2 Recursive Least-Squares Algorithms
Here, an RLS algorithm is devised for an efficient implementation of the RJIO
method. To this end, let us first consider the Lagrangian
$$L_{LS}\big(S_D(i), \bar{w}(i), \varepsilon(i)\big) = \sum_{l=1}^{i} \alpha^{\,i-l} \big| \bar{w}^H(i)\, S_D^H(i)\, r(l) \big|^2 + \varepsilon^2(i)\, \bar{w}^H(i)\, S_D^H(i)\, S_D(i)\, \bar{w}(i) + 2\,\Re\big\{ \lambda^* \big( \bar{w}^H(i)\, S_D^H(i)\, a_p(\theta_k) - 1 \big) \big\}, \qquad (2.24)$$
where $\alpha$ is the forgetting factor, chosen as a positive constant close to, but less than, 1.
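To make the role of the forgetting factor concrete, the NumPy sketch below evaluates the exponentially weighted cost (2.24) for a batch of stored snapshots. The function name, the column-stacked snapshot matrix `R`, and the default value of `alpha` are illustrative assumptions; the real part of the Lagrange term is taken so the cost is real-valued.

```python
import numpy as np

def ls_lagrangian(S_D, w, eps, R, lam, a_p, alpha=0.998):
    """Evaluate the exponentially weighted LS Lagrangian of (2.24).

    R   : (M, i) matrix whose columns are the snapshots r(1), ..., r(i)
    lam : Lagrange multiplier          alpha : forgetting factor, 0 << alpha < 1
    (Names, shapes, and default alpha are illustrative assumptions.)
    """
    i = R.shape[1]
    weights = alpha ** np.arange(i - 1, -1, -1)     # alpha^(i-l) for l = 1, ..., i
    y = w.conj() @ (S_D.conj().T @ R)               # outputs w^H S_D^H r(l), shape (i,)
    fit = np.sum(weights * np.abs(y) ** 2)          # exponentially weighted squared outputs
    loading = eps ** 2 * np.real(w.conj() @ (S_D.conj().T @ (S_D @ w)))
    constraint = 2.0 * np.real(np.conj(lam) * (w.conj() @ (S_D.conj().T @ a_p) - 1.0))
    return fit + loading + constraint
```

With $\alpha = 1$ all snapshots are weighted equally; as $\alpha$ decreases, older snapshots are discounted geometrically, which is what lets the recursive solution track a nonstationary environment.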