Digital Signal Processing Reference
In-Depth Information
Algorithm: (gradient ascent kurtosis maximization) Choose $\eta > 0$ and $w(0) \in S^{n-1}$. Then iterate
$$
\begin{aligned}
\Delta w(t) &:= \operatorname{sgn}\!\big(\operatorname{kurt}(w(t)^\top z)\big)\, E\big(z\,(w(t)^\top z)^3\big) \\
v(t+1) &:= w(t) + \eta\, \Delta w(t) \\
w(t+1) &:= \frac{v(t+1)}{|v(t+1)|}.
\end{aligned}
$$
The third equation is needed in order for the algorithm to stay on the sphere $S^{n-1}$.
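For concreteness, the three updates above can be sketched in NumPy. Everything besides the update rule itself is an assumed harness, not from the text: the data layout ($z$ as an $n \times T$ array of whitened samples), the step size, the iteration count, and the Laplace-source demo.

```python
import numpy as np

def kurt(y):
    # sample kurtosis of a (roughly) zero-mean signal
    return np.mean(y**4) - 3.0 * np.mean(y**2)**2

def gradient_ascent_kurtosis(z, eta=0.1, n_iter=500, seed=0):
    # z: (n, T) array of whitened mixtures; returns a unit vector w on S^{n-1}
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(z.shape[0])
    w /= np.linalg.norm(w)                      # w(0) on the sphere
    for _ in range(n_iter):
        y = w @ z                               # w(t)^T z for all samples
        grad = (z * y**3).mean(axis=1)          # E(z (w(t)^T z)^3)
        v = w + eta * np.sign(kurt(y)) * grad   # v(t+1) := w(t) + eta Delta w(t)
        w = v / np.linalg.norm(v)               # w(t+1): renormalize onto S^{n-1}
    return w

# demo (assumed setup): mix two super-Gaussian Laplace sources, then whiten
rng = np.random.default_rng(1)
s = rng.laplace(size=(2, 20000))
x = rng.standard_normal((2, 2)) @ s
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E @ np.diag(d**-0.5) @ E.T) @ x            # whitened mixtures
w = gradient_ascent_kurtosis(z)
```

The final normalization in each loop iteration is exactly the third equation of the algorithm, keeping the iterate on the unit sphere.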
Fixed-point kurtosis maximization
The above local kurtosis maximization algorithm can be considerably
improved by introducing the following fixed-point algorithm:
First, note that a continuously differentiable function $f$ on $S^{n-1}$ is extremal at $w$ only if its gradient $\nabla f(w)$ is proportional to $w$ at this point. That is,
$$w \propto \nabla f(w).$$
So here, using equation (4.5), we get
$$w \propto \nabla f(w) = E\big((w^\top z)^3 z\big) - 3\,|w|^2\, w$$
on $S^{n-1}$.
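This proportionality can be checked directly for whitened $z$, where $E\big((w^\top z)^2\big) = |w|^2$. The following derivation is a reconstruction, since equation (4.5) itself is not reproduced in this excerpt:

$$\operatorname{kurt}(w^\top z) = E\big((w^\top z)^4\big) - 3\,\Big(E\big((w^\top z)^2\big)\Big)^2 = E\big((w^\top z)^4\big) - 3\,|w|^4,$$

and differentiating with respect to $w$ gives

$$\nabla \operatorname{kurt}(w^\top z) = 4\,E\big((w^\top z)^3 z\big) - 12\,|w|^2\, w,$$

which is the stated gradient up to the irrelevant positive factor $4$.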
Algorithm: (fixed-point kurtosis maximization) Choose $w(0) \in S^{n-1}$. Then iterate
$$
\begin{aligned}
v(t+1) &:= E\big((w(t)^\top z)^3 z\big) - 3\, w(t) \\
w(t+1) &:= \frac{v(t+1)}{|v(t+1)|}.
\end{aligned}
$$
The above iterative procedure has the separation vectors as fixed points. The advantage of using such a fixed-point algorithm lies in the fact that the convergence speed is greatly enhanced (cubic convergence, in contrast to quadratic convergence of the gradient-ascent algorithm) and that, apart from the starting vector, the algorithm is parameter-free. For more details, refer to [124], [120].
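The fixed-point iteration admits a similar NumPy sketch, under the same assumed setup as before (whitened samples in an $n \times T$ array; function names and the Laplace-source demo are illustrative, not from the text):

```python
import numpy as np

def fixed_point_kurtosis(z, n_iter=30, seed=0):
    # z: (n, T) whitened mixtures; returns one separating direction on S^{n-1}
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(z.shape[0])
    w /= np.linalg.norm(w)                     # w(0) in S^{n-1}
    for _ in range(n_iter):
        y = w @ z
        v = (z * y**3).mean(axis=1) - 3.0 * w  # v(t+1) := E((w^T z)^3 z) - 3 w(t)
        w = v / np.linalg.norm(v)              # w(t+1): project back onto the sphere
    return w

# demo (assumed setup): mix two Laplace sources, whiten, then extract one direction
rng = np.random.default_rng(1)
s = rng.laplace(size=(2, 20000))
x = rng.standard_normal((2, 2)) @ s
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E @ np.diag(d**-0.5) @ E.T) @ x           # whitened mixtures
w = fixed_point_kurtosis(z)
```

Note that separation vectors are fixed points only up to sign, so the iterate may flip sign between steps; the kurtosis of the extracted component is unaffected.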
Generalizations
Using kurtosis to measure non-Gaussianity can be problematic for non-Gaussian sources with very small or even vanishing kurtosis. In general it