We can also compute the weight vector $\mathbf{w}$ adaptively using gradient descent updates, as discussed in Section 1.3.2:

$$\mathbf{w}(n+1) = \mathbf{w}(n) - \mu\,\frac{\partial J_L(\mathbf{w})}{\partial \mathbf{w}^*(n)} = \mathbf{w}(n) + \mu\,E\{e^*(n)\,\mathbf{x}(n)\}$$
or using stochastic gradient updates as in

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,e^*(n)\,\mathbf{x}(n),$$
which leads to the popular least-mean-square (LMS) algorithm [113]. For both updates, $\mu > 0$ is the stepsize that determines the trade-off between the rate of convergence and the minimum error $J_L(\mathbf{w}_{\mathrm{opt}})$.
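To make the stochastic update concrete, the following is a minimal NumPy sketch of the complex LMS recursion in an assumed system-identification setting; the filter length, stepsize value, and white Gaussian signal model are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: identify an unknown length-N complex filter (illustrative).
N = 4                                   # filter length (assumption)
mu = 0.01                               # stepsize mu > 0 (assumption)
w_true = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w = np.zeros(N, dtype=complex)          # adaptive weights w(n)

for n in range(5000):
    x = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # input x(n)
    d = np.vdot(w_true, x)              # desired response d(n) = w_true^H x(n)
    e = d - np.vdot(w, x)               # error e(n) = d(n) - w^H x(n)
    w = w + mu * np.conj(e) * x         # LMS update: w(n+1) = w(n) + mu e*(n) x(n)

print(np.allclose(w, w_true, atol=1e-3))  # True: weights converge toward w_true
```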
Widely Linear MSE Filter
A widely linear filter forms the estimate of $d(n)$ through the inner product

$$y_{WL}(n) = \mathbf{v}^H\,\bar{\mathbf{x}}(n) \qquad (1.37)$$
where the weight vector is $\mathbf{v} = [v_0 \; v_1 \; \cdots \; v_{2N-1}]^T$, that is, it has double the dimension compared to the linear filter, and $\bar{\mathbf{x}}(n)$ is the augmented input vector

$$\bar{\mathbf{x}}(n) = \begin{bmatrix} \mathbf{x}(n) \\ \mathbf{x}^*(n) \end{bmatrix}$$

as defined in Table 1.2. The MSE cost in this case is written as

$$J_{WL}(\mathbf{v}) = E\{|d(n) - y_{WL}(n)|^2\}.$$
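As a small illustration of the augmented structure, the sketch below builds $\bar{\mathbf{x}}(n)$ by stacking $\mathbf{x}(n)$ on top of its conjugate and evaluates $y_{WL}(n) = \mathbf{v}^H\bar{\mathbf{x}}(n)$; the dimension and the random data are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                                    # linear filter length (assumption)

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)    # input vector x(n)
x_bar = np.concatenate([x, np.conj(x)])  # augmented vector [x(n); x*(n)], length 2N

v = rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N)  # 2N weights

y_wl = np.vdot(v, x_bar)                 # widely linear estimate y_WL(n) = v^H x_bar(n)
print(x_bar.shape, y_wl)                 # (8,) and a complex scalar
```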
As in the case of the linear filter, the minimum-MSE optimal weight vector is the solution of

$$\frac{\partial J_{WL}(\mathbf{v})}{\partial \mathbf{v}^*} = \mathbf{0}$$

and results in the widely linear complex Wiener-Hopf equation given by
$$E\{\bar{\mathbf{x}}(n)\,\bar{\mathbf{x}}^H(n)\}\,\mathbf{v}_{\mathrm{opt}} = E\{d^*(n)\,\bar{\mathbf{x}}(n)\}.$$
We can solve for the optimal weight vector as

$$\mathbf{v}_{\mathrm{opt}} = \bar{\mathbf{C}}^{-1}\,\bar{\mathbf{p}},$$

where $\bar{\mathbf{C}} = E\{\bar{\mathbf{x}}(n)\,\bar{\mathbf{x}}^H(n)\}$ is the augmented covariance matrix and $\bar{\mathbf{p}} = E\{d^*(n)\,\bar{\mathbf{x}}(n)\}$ is the augmented cross-correlation vector.
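A numerical sketch of this closed-form solution follows, with the augmented covariance and cross-correlation estimated from samples; the noncircular data model and the sample size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 4, 100_000                        # filter length, sample count (assumptions)

# Assumed widely linear data model: d(n) = a^H x(n) + b^H x*(n).
X = rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)
d = a.conj() @ X + b.conj() @ np.conj(X)

X_bar = np.vstack([X, np.conj(X)])       # augmented inputs, shape (2N, T)

C_bar = X_bar @ X_bar.conj().T / T       # sample estimate of E{x_bar x_bar^H}
p_bar = X_bar @ np.conj(d) / T           # sample estimate of E{d* x_bar}

v_opt = np.linalg.solve(C_bar, p_bar)    # v_opt = C_bar^{-1} p_bar
err = np.max(np.abs(v_opt - np.concatenate([a, b])))
print(err)                               # small: v_opt recovers [a; b]
```

Because $d(n)$ here is generated as a widely linear function of $\mathbf{x}(n)$, the solution recovers the stacked coefficients $[\mathbf{a};\,\mathbf{b}]$, which a strictly linear length-$N$ filter could not capture.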