and they must satisfy $\Sigma w_1 = \alpha_1 w_1$ and $w_1^\top w_1 = 1$. Therefore, $w_1$ is obtained by finding the eigenvector associated with the leading eigenvalue $\alpha_1$. For the second principal component, we look for a unit vector $w_2$ which is orthogonal to $w_1$ and maximizes the variance of the projection of $X$ along $w_2$. That is, in terms of a Lagrangian problem, we solve for $\alpha_2$, $\beta$ and $w_2$ in the following optimization formula:
$$\max_{\alpha_2,\beta,w_2}\; w_2^\top \Sigma w_2 - \alpha_2\,(w_2^\top w_2 - 1) - \beta\,(w_2^\top w_1). \qquad ( . )$$
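To see why this leads back to an eigenproblem, differentiate the objective with respect to $w_2$ and set the derivative to zero:
$$2\Sigma w_2 - 2\alpha_2 w_2 - \beta w_1 = 0.$$
Left-multiplying by $w_1^\top$ and using $\Sigma w_1 = \alpha_1 w_1$ together with $w_1^\top w_2 = 0$ gives $\beta = 0$, so $\Sigma w_2 = \alpha_2 w_2$; that is, $w_2$ is the eigenvector associated with the second-largest eigenvalue $\alpha_2$.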
Using a similar procedure, we are able to find the leading principal components sequentially. Assume for simplicity that the data $x_1, \ldots, x_n$ are already centered at their mean, so that the sample covariance matrix is given by $\Sigma_n = \frac{1}{n}\sum_{j=1}^{n} x_j x_j^\top$. By applying the above sequential procedure to the sample covariance $\Sigma_n$, we can obtain the empirical principal components.
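As a concrete illustration, the following NumPy sketch computes the empirical principal components by eigendecomposing the centered sample covariance $\Sigma_n$; for a symmetric matrix this one-shot eigendecomposition gives the same components as the sequential Lagrangian procedure described above. The function name, synthetic data, and number of components are illustrative assumptions, not from the text.

import numpy as np

def empirical_pca(X, n_components=2):
    """Empirical principal components via eigendecomposition of the
    sample covariance (a sketch; names and defaults are illustrative)."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)            # center the data at their mean
    Sigma_n = Xc.T @ Xc / n            # Sigma_n = (1/n) sum_j x_j x_j^T
    eigvals, eigvecs = np.linalg.eigh(Sigma_n)
    order = np.argsort(eigvals)[::-1]  # sort eigenpairs by decreasing eigenvalue
    return eigvals[order][:n_components], eigvecs[:, order][:, :n_components]

# Example: leading two principal components of synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
alphas, W = empirical_pca(X, n_components=2)
scores = (X - X.mean(axis=0)) @ W      # projections of the data along w_1, w_2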
For KPCA using the feature representation ( . ), the data mapped into the feature space $\mathcal{H}_\kappa$ are $\gamma_1, \ldots, \gamma_n$. The sample covariance (which is also known as a covariance operator in $\mathcal{H}_\kappa$) is given by
$$C_n = \frac{1}{n}\sum_{j=1}^{n} (\gamma_j - \bar{\gamma}) \otimes (\gamma_j - \bar{\gamma}), \qquad ( . )$$
where $\bar{\gamma}$ is the sample mean of the $\gamma_j$'s and $f \otimes g$ is a linear operator defined by $(f \otimes g)(h) = \langle g, h \rangle_{\mathcal{H}_\kappa}\, f$ for $f, g, h \in \mathcal{H}_\kappa$. Applying similar arguments to before, we aim to find the leading eigencomponents of $C_n$. That is, we solve for $h$ in the following optimization problem:
$$\max_{h \in \mathcal{H}_\kappa}\; \langle h, C_n h \rangle_{\mathcal{H}_\kappa} \quad \text{subject to} \quad \|h\|_{\mathcal{H}_\kappa} = 1. \qquad ( . )$$
It can be shown that the solution to this is of the form $h = \sum_{j=1}^{n} \beta_j \gamma_j \in \mathcal{H}_\kappa$, where the $\beta_j$'s are scalars. As
$$\langle h, C_n h \rangle_{\mathcal{H}_\kappa} = \sum_{i,j=1}^{n} \beta_i \beta_j\, \langle \gamma_i, C_n \gamma_j \rangle_{\mathcal{H}_\kappa} = \frac{1}{n}\, \beta^\top K \Big(I_n - \frac{1_n 1_n^\top}{n}\Big) K \beta$$
and $\|h\|^2_{\mathcal{H}_\kappa} = \beta^\top K \beta$, where $K = [\kappa(x_i, x_j)]$ denotes the $n \times n$ kernel data matrix, the optimization problem can be reformulated as
$$\max_{\beta \in \mathbb{R}^n}\; \frac{1}{n}\, \beta^\top K \Big(I_n - \frac{1_n 1_n^\top}{n}\Big) K \beta \quad \text{subject to} \quad \beta^\top K \beta = 1. \qquad ( . )$$
R n
The Lagrangian of the above optimization problem is
$$\max_{\alpha \in \mathbb{R},\, \beta \in \mathbb{R}^n}\; \frac{1}{n}\, \beta^\top K \Big(I_n - \frac{1_n 1_n^\top}{n}\Big) K \beta - \alpha\,(\beta^\top K \beta - 1),$$
where $\alpha$ is the Lagrange multiplier. Taking derivatives with respect to the $\beta$'s and setting them to zero, we get
$$\frac{1}{n}\, K \Big(I_n - \frac{1_n 1_n^\top}{n}\Big) K \beta = \alpha K \beta, \quad \text{or} \quad \Big(I_n - \frac{1_n 1_n^\top}{n}\Big) K \beta = n\alpha\beta. \qquad ( . )$$
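To make the final eigenproblem concrete, here is a minimal NumPy sketch that solves $(I_n - 1_n 1_n^\top/n) K \beta = n\alpha\beta$ for the leading eigenpairs and rescales each solution so that $\beta^\top K \beta = 1$, as required by the constraint above. The helper name, the RBF kernel, and all parameter values are illustrative assumptions, not from the text.

import numpy as np

def kpca_coefficients(K, n_components=2):
    """Solve (I_n - 1_n 1_n^T / n) K beta = n * alpha * beta for the leading
    eigenpairs, then rescale each beta so that beta^T K beta = 1 (a sketch)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix I_n - 1_n 1_n^T / n
    eigvals, eigvecs = np.linalg.eig(H @ K)  # H @ K is not symmetric in general
    eigvals, eigvecs = eigvals.real, eigvecs.real  # eigenvalues are real up to round-off
    order = np.argsort(eigvals)[::-1][:n_components]
    alphas, betas = eigvals[order] / n, eigvecs[:, order]
    for j in range(n_components):            # enforce ||h||^2 = beta^T K beta = 1
        betas[:, j] /= np.sqrt(betas[:, j] @ K @ betas[:, j])
    return alphas, betas

# Example with an RBF kernel (kernel choice and bandwidth are assumptions)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
alphas, betas = kpca_coefficients(K, n_components=2)
scores = K @ betas                            # <gamma_i, h>, the kernel PC projections

Because $(I_n - 1_n 1_n^\top/n)K$ is not symmetric, the sketch uses a general eigensolver and keeps the real parts; an equivalent and more common implementation works with the symmetric doubly centered kernel matrix instead.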