• z_1 is the first principal component.
• z_N is the first minor component.
• z_1, z_2, ..., z_M (M < N) are M principal components.
• span{z_1, z_2, ..., z_M} is an M-dimensional principal subspace.
• z_{N−L+1}, z_{N−L+2}, ..., z_N (L < N) are L minor components.
• span{z_{N−L+1}, z_{N−L+2}, ..., z_N} is an L-dimensional minor subspace.
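For concreteness, the following NumPy sketch (an illustration added here, not part of the original text; the names R, Z, M, and L are assumptions) shows how the principal and minor components and their subspaces can be read off an eigendecomposition of an estimated autocorrelation matrix.

```python
import numpy as np

# Hypothetical autocorrelation matrix R = E[x x^T], estimated from data X (N x T).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 1000))        # N = 4 input signals, 1000 samples
R = X @ X.T / X.shape[1]

# Eigendecomposition; sort eigenvalues in decreasing order so that
# z_1 is the first principal component and z_N the first minor component.
eigvals, Z = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, Z = eigvals[order], Z[:, order]

M, L = 2, 2                               # chosen subspace dimensions (M, L < N)
principal_components = Z[:, :M]           # z_1, ..., z_M
minor_components = Z[:, -L:]              # z_{N-L+1}, ..., z_N
# The spans of these column sets are the M-dimensional principal subspace
# and the L-dimensional minor subspace, respectively.
```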
The general neural architecture is given by a single layer of Q linear neurons with inputs x(t) = [x_1(t), x_2(t), ..., x_N(t)]^T and outputs y(t) = [y_1(t), y_2(t), ..., y_Q(t)]^T:

$$
y_j(t) = \sum_{i=1}^{N} w_{ij}(t)\, x_i(t) = w_j^T(t)\, x(t) \qquad (3.16)
$$
for j = 1, 2, ..., Q or j = N, N−1, ..., N−Q+1, where w_j(t) = [w_{1j}(t), ..., w_{Nj}(t)]^T is the connection weight vector. Hence, Q = 1 for PCA and MCA (in this case, j = N only).
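As a minimal sketch of (3.16), assuming the weight vectors w_j are stored as the columns of a matrix W, the layer output at time t is a single matrix-vector product; the names W, x_t, and y_t below are illustrative and not from the original text.

```python
import numpy as np

def linear_layer_output(W: np.ndarray, x_t: np.ndarray) -> np.ndarray:
    """Eq. (3.16): y_j(t) = w_j(t)^T x(t), computed for all Q neurons at once.

    W   -- N x Q matrix whose j-th column is the weight vector w_j(t)
    x_t -- input vector x(t) of length N
    """
    return W.T @ x_t          # length-Q output vector y(t)

# Example: N = 5 inputs, Q = 1 neuron (the PCA/MCA case)
rng = np.random.default_rng(1)
W = rng.standard_normal((5, 1))
x_t = rng.standard_normal(5)
y_t = linear_layer_output(W, x_t)   # y(t) = w^T(t) x(t)
```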
3.3.1 Minor Subspace Analysis
The minor subspace (dimension M ) usually represents the statistics of the additive
noise and hence is referred to as the noise subspace in many fields of signal
processing, such as those cited in Section 2.2.1. Note that in this section, MSA
represents both the subspace and the minor unit eigenvectors.
The existing MSA networks are:
Oja's MCA algorithm. Oja [141] extended the OJA rule (2.16) as follows:
$$
w_j(t+1) = w_j(t)
\;\underbrace{-\,\alpha(t)\, x(t)\, y_j(t)}_{A}\;
\underbrace{+\,\alpha(t)\left[\, y_j^2(t) + 1 - w_j^T(t)\, w_j(t) \,\right] w_j(t)}_{B}\;
\underbrace{-\,\vartheta\,\alpha(t) \sum_{i > j} y_i(t)\, y_j(t)\, w_i(t)}_{C}
\qquad (3.17)
$$
for j = N, N−1, ..., N−M+1. The corresponding averaging ODE is given by

$$
\frac{dw_j(t)}{dt} = -R\, w_j(t)
+ \left[ w_j^T(t)\, R\, w_j(t) \right] w_j(t)
+ w_j(t)
- \left[ w_j^T(t)\, w_j(t) \right] w_j(t)
- \vartheta \sum_{i > j} \left[ w_i^T(t)\, R\, w_j(t) \right] w_i(t)
\qquad (3.18)
$$
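The sketch below implements one stochastic step of update (3.17) for all M minor weight vectors; the learning rate α, the inhibition constant ϑ, and the column ordering convention are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

def oja_msa_step(W: np.ndarray, x_t: np.ndarray, alpha: float, theta: float) -> np.ndarray:
    """One step of Oja's MSA update (3.17).

    Columns of W are ordered w_N, w_{N-1}, ..., w_{N-M+1}, so the "i > j"
    inhibition for column k comes from the columns 0, ..., k-1 before it.
    """
    y = W.T @ x_t                                   # outputs y_j(t) = w_j^T(t) x(t)
    W_new = W.copy()
    for k in range(W.shape[1]):
        w = W[:, k]
        anti_hebbian = -alpha * x_t * y[k]                          # term A
        normalizing = alpha * (y[k] ** 2 + 1.0 - w @ w) * w         # term B
        inhibition = -theta * alpha * (W[:, :k] @ (y[:k] * y[k]))   # term C
        W_new[:, k] = w + anti_hebbian + normalizing + inhibition
    return W_new
```

In a full run one would apply this step to a stream of samples x(t) while decreasing α(t), as is usual for the stochastic approximation whose averaging ODE is (3.18).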