Differentiating (4.47), we get

∂J_SW/∂w_i = 4 sgn(c₄(s(n)))[E{|y(n)|²y(n)x∗(n − i)} − 2E{|y(n)|²}E{y(n)x∗(n − i)} − E{y²(n)}E{y∗(n)x∗(n − i)}] + γ₁E{y(n)x∗(n − i)} + γ₂E{y∗(n)x∗(n − i)}    (4.50)
The problem is how to estimate the expectations E{y²(n)} and E{|y(n)|²} in (4.50). The original proposal in [269] uses an empirical average for those expectations and, for the correlations, employs a stochastic approximation. This leads to the following adaptation procedure:
w(n + 1) = w(n) − μ{4 sgn(c₄(s(n)))[|y(n)|²y(n) − 2Ê{|y|²}(n)y(n) − Ê{y²}(n)y∗(n)] + γ₁y(n) + γ₂y∗(n)}x∗(n)    (4.51a)
Ê{y²}(n) = (1 − μ₁)Ê{y²}(n − 1) + μ₁y²(n)    (4.51b)
Ê{|y|²}(n) = (1 − μ₂)Ê{|y|²}(n − 1) + μ₂|y(n)|²    (4.51c)
where μ₁ and μ₂ are the step sizes for the estimation of E{y²(n)} and E{|y(n)|²}, respectively.
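The adaptation procedure in (4.51) can be sketched as follows. This is a minimal illustration, not the original implementation: the names (sw_update, y2_hat, ay2_hat) are invented here, the equalizer is assumed to be a linear FIR filter with output y(n) = Σᵢ wᵢ x(n − i), and the expectations of (4.50) are replaced by the running averages of (4.51b)/(4.51c) and by instantaneous values, as the text describes.

```python
import numpy as np

def sw_update(w, x, y2_hat, ay2_hat, sgn_c4s=-1.0,
              gamma1=0.0, gamma2=0.0, mu=1e-3, mu1=1e-2, mu2=1e-2):
    """One stochastic-gradient Shalvi-Weinstein step (illustrative names).

    w       : equalizer taps w(n)
    x       : regressor [x(n), x(n-1), ..., x(n-L+1)]
    y2_hat  : running estimate of E{y^2(n)}, eq. (4.51b)
    ay2_hat : running estimate of E{|y(n)|^2}, eq. (4.51c)
    """
    # Equalizer output y(n) = sum_i w_i x(n - i).
    y = np.dot(w, x)
    # Exponentially weighted averages replacing the expectations in (4.50).
    y2_hat = (1 - mu1) * y2_hat + mu1 * y**2
    ay2_hat = (1 - mu2) * ay2_hat + mu2 * np.abs(y)**2
    # Instantaneous gradient, mirroring the bracketed term of (4.51a).
    g = (4 * sgn_c4s * (np.abs(y)**2 * y - 2 * ay2_hat * y
                        - y2_hat * np.conj(y))
         + gamma1 * y + gamma2 * np.conj(y))
    w = w - mu * g * np.conj(x)
    return w, y, y2_hat, ay2_hat
```

Each call consumes one received sample buffer and returns the updated taps together with the refreshed statistics, so the three recursions of (4.51) advance in lockstep.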
It is worth noting that if we consider E{s²(n)} = 0 and c₄(s(n)) < 0, which is the case for digital modulation signals, we have

α = −4E{|s(n)|²}/c₄(s(n))    (4.52)
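These two conditions are easy to verify numerically for a concrete constellation. The short check below (illustrative, not from the text) evaluates E{s²(n)} and the kurtosis c₄(s(n)) = E{|s|⁴} − 2E²{|s|²} − |E{s²}|² for unit-power QPSK:

```python
import numpy as np

# Unit-power QPSK alphabet: zero-mean and circular, so E{s^2} = 0.
s = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

Es2 = np.mean(s**2)                    # E{s^2(n)}
p2 = np.mean(np.abs(s)**2)             # E{|s(n)|^2} = 1
p4 = np.mean(np.abs(s)**4)             # E{|s(n)|^4} = 1
c4 = p4 - 2 * p2**2 - np.abs(Es2)**2   # c4(s(n)) = 1 - 2 - 0 = -1
print(Es2, c4)                         # prints: 0j -1.0
```

So E{s²(n)} = 0 and c₄(s(n)) = −1 < 0, exactly the regime assumed above.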
and the update rule in (4.51) becomes
w(n + 1) = w(n) − μ[|y(n)|² − E{|s(n)|⁴}/E{|s(n)|²}]y(n)x∗(n)    (4.53)
which is the Godard/CMA algorithm [124,292]. More about the relationships between blind equalization algorithms will be discussed in Section 4.7.
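The single-sample update in (4.53) can be sketched as below; the function name and the stacked-regressor convention are assumptions of this sketch, and R2 = E{|s|⁴}/E{|s|²}, which equals 1 for a unit-power constant-modulus source such as QPSK.

```python
import numpy as np

def cma_update(w, x, mu=1e-3, R2=1.0):
    # One Godard/CMA step as in (4.53); x holds the regressor
    # [x(n), x(n-1), ..., x(n-L+1)], so y(n) = sum_i w_i x(n - i).
    y = np.dot(w, x)
    # The constant-modulus error (|y|^2 - R2) drives the gradient step.
    w = w - mu * (np.abs(y)**2 - R2) * y * np.conj(x)
    return w, y
```

Note that when the output already has the target modulus, |y(n)|² = R2, the error term vanishes and w is left unchanged, which is the fixed-point behavior expected of (4.53).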
4.5 The Super-Exponential Algorithm
Shalvi and Weinstein [270] proposed the SEA as an alternative to accelerate the convergence of the techniques discussed in Section 4.4. The algorithm