where g_i(·) is a nonlinear function. In this case, the nonlinearity is included in the projection onto the basis vectors w_i, but there are other possibilities [148].
In matrix notation, (6.54) can be expressed as
J_{NPCA}(\mathbf{W}) = E\left\{\left\|\mathbf{x} - \mathbf{W}^T\,\mathbf{g}(\mathbf{W}\mathbf{x})\right\|^2\right\}    (6.55)
where g(·) = [g_1(·) ... g_N(·)]. If x has been prewhitened, the separating matrix W will be orthogonal (left-multiplying the error vector by W preserves its norm and turns it into y − g(y), with y = Wx), and (6.55) reduces to
J_{NPCA}(\mathbf{W}) = E\left\{\sum_{i=1}^{N}\left[y_i - g_i(y_i)\right]^2\right\}    (6.56)
It is interesting to note that (6.56) is very similar to the Bussgang algorithms [213] discussed in Section 4.3.
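To make the cost concrete, the following minimal NumPy sketch evaluates sample-average versions of (6.55) and (6.56); the function names and the choice g(u) = tanh(u) are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def npca_cost(W, X, g=np.tanh):
    """Sample-average estimate of the NPCA cost (6.55).

    W : (N, N) separating matrix
    X : (N, T) matrix of T prewhitened observation vectors
    g : elementwise nonlinearity (tanh is an assumed, common choice)
    """
    E = X - W.T @ g(W @ X)               # x - W^T g(Wx) for every sample
    return np.mean(np.sum(E**2, axis=0)) # average squared error norm

def npca_cost_orthogonal(W, X, g=np.tanh):
    """Equivalent form (6.56), valid when W is orthogonal."""
    Y = W @ X                            # y = Wx
    return np.mean(np.sum((Y - g(Y))**2, axis=0))
```

When W is orthogonal, the two functions should agree up to numerical precision, mirroring the reduction from (6.55) to (6.56).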
The cost function defined in (6.55) can be minimized by any optimization method. However, the original proposal employs a recursive least squares (RLS, vide Section 3.5.1) approach [222]. The resulting NPCA algorithm is given in Algorithm 6.1.
Algorithm 6.1: Nonlinear PCA
1. Randomly initialize W(0) and P(0);
2. While a stopping criterion is not met, do:
\mathbf{z}(n) = \mathbf{g}(\mathbf{W}(n-1)\,\bar{\mathbf{x}}(n))    (6.57)
\mathbf{h}(n) = \mathbf{P}(n-1)\,\mathbf{z}(n)    (6.58)
\mathbf{m}(n) = \mathbf{h}(n)/\left(\lambda + \mathbf{z}^T(n)\,\mathbf{h}(n)\right)    (6.59)
\mathbf{P}(n) = \lambda^{-1}\,\Upsilon\left[\mathbf{P}(n-1) - \mathbf{m}(n)\,\mathbf{h}^T(n)\right]    (6.60)
\mathbf{e}(n) = \bar{\mathbf{x}}(n) - \mathbf{W}^T(n-1)\,\mathbf{z}(n)    (6.61)
\mathbf{W}(n) = \mathbf{W}(n-1) + \mathbf{m}(n)\,\mathbf{e}^T(n)    (6.62)
where ϒ[J] denotes an operator that generates a new symmetric matrix with the same upper-triangular portion of J, x̄(n) denotes the whitened data, and λ is the forgetting factor of the RLS algorithm.
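A compact NumPy sketch of the recursion (6.57)–(6.62) is given below. The nonlinearity g(u) = tanh(u), the initialization P(0) = I, and the helper name tri_sym for the ϒ operator are illustrative assumptions rather than prescriptions from the text.

```python
import numpy as np

def tri_sym(J):
    """Upsilon operator: builds a symmetric matrix from the
    upper-triangular portion of J (diagonal included)."""
    U = np.triu(J)
    return U + U.T - np.diag(np.diag(U))

def npca_rls(X_bar, lam=0.99, g=np.tanh, seed=0):
    """RLS-type nonlinear PCA, eqs. (6.57)-(6.62).

    X_bar : (N, T) prewhitened data, one column per sample
    lam   : forgetting factor of the RLS algorithm
    """
    N, T = X_bar.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((N, N))   # random W(0)
    P = np.eye(N)                     # assumed P(0) = I
    for n in range(T):
        x = X_bar[:, n]
        z = g(W @ x)                          # (6.57)
        h = P @ z                             # (6.58)
        m = h / (lam + z @ h)                 # (6.59)
        P = tri_sym(P - np.outer(m, h)) / lam # (6.60)
        e = x - W.T @ z                       # (6.61)
        W = W + np.outer(m, e)                # (6.62)
    return W
```

After the loop, the separated sources can be estimated as y(n) = W x̄(n); in practice, convergence is monitored through the evolution of W or of the cost (6.55).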