1.2.1 Finding Patterns in Feature Space
To reinforce the idea that the feature mapping need not be explicit, we give
examples of how to perform some elementary and often-used calculations in
feature space, using only the information provided via the kernel function.
The basic relations we measure in the feature space also form the basis of
classical linear algorithms from statistics. At the end of this section, we will
outline how a linear classifier can be built using the dual representation.
Given a finite subset $S = \{x_1, \ldots, x_\ell\}$ of an input space $X$ and a kernel
$\kappa(x, z)$ satisfying
$$\kappa(x, z) = \langle \phi(x), \phi(z) \rangle,$$
where $\phi$ is a feature map into a feature space $F$, let $\phi(S) = \{\phi(x_1), \ldots, \phi(x_\ell)\}$
be the image of $S$ under the map $\phi$. Hence $\phi(S)$ is a subset of the inner
product space $F$. Just considering the inner product information contained
in the kernel matrix $K$, significant information about the embedded data set
$\phi(S)$ can be obtained. The element
$$K_{ij} = \kappa(x_i, x_j), \qquad i, j = 1, \ldots, \ell,$$
is a general entry in the kernel matrix.
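To make this concrete, the following is a minimal sketch of how such a kernel matrix could be computed in Python. It assumes a polynomial kernel $\kappa(x, z) = (\langle x, z \rangle + 1)^2$ purely as an illustrative choice; the names poly_kernel and kernel_matrix, as well as the sample points, are hypothetical and not taken from the text.

import numpy as np

def poly_kernel(x, z, degree=2):
    # Illustrative kernel: kappa(x, z) = (<x, z> + 1)^degree
    return (np.dot(x, z) + 1.0) ** degree

def kernel_matrix(S, kappa):
    # Build K with entries K[i, j] = kappa(x_i, x_j)
    n = len(S)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = kappa(S[i], S[j])
    return K

# A small sample S of four points in R^2 (arbitrary values for illustration)
S = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
K = kernel_matrix(S, poly_kernel)

All the calculations below touch the data only through the entries of $K$, never through an explicit feature map.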
Working in a kernel-defined feature space means that we are not able to
explicitly represent points, but despite this handicap there is a surprising amount
of useful information that can be obtained about $\phi(S)$.
Norm of Feature Vectors. The simplest example of this is the evaluation
of the norm of $\phi(x)$; it is given by
$$\|\phi(x)\|_2 = \sqrt{\|\phi(x)\|_2^2} = \sqrt{\langle \phi(x), \phi(x) \rangle} = \sqrt{\kappa(x, x)}.$$
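Continuing the sketch above (still with the illustrative poly_kernel), the norm of an image $\phi(x)$ is obtained from a single kernel evaluation, without ever constructing $\phi(x)$:

# ||phi(x)||_2 = sqrt(kappa(x, x)); no explicit feature vector is needed.
x = np.array([0.5, 1.5])
norm_phi_x = np.sqrt(poly_kernel(x, x))   # sqrt((0.5**2 + 1.5**2 + 1)**2) = 3.5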
The norms of linear combinations of images in the feature space can be
evaluated with the following:
$$\Bigl\| \sum_{i=1}^{\ell} \alpha_i \phi(x_i) \Bigr\|^2
= \Bigl\langle \sum_{i=1}^{\ell} \alpha_i \phi(x_i), \sum_{j=1}^{\ell} \alpha_j \phi(x_j) \Bigr\rangle
= \sum_{i=1}^{\ell} \alpha_i \sum_{j=1}^{\ell} \alpha_j \langle \phi(x_i), \phi(x_j) \rangle
= \sum_{i,j=1}^{\ell} \alpha_i \alpha_j \kappa(x_i, x_j).$$
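In the notation of the sketch above, this squared norm is the quadratic form $\alpha^\top K \alpha$; the snippet below (reusing the hypothetical K and sample S from the first sketch) evaluates it from $\ell^2$ kernel values, independently of the dimension of $F$:

# ||sum_i alpha_i phi(x_i)||^2 = sum_{i,j} alpha_i alpha_j kappa(x_i, x_j) = alpha^T K alpha
alpha = np.array([0.5, -0.25, 1.0, 0.1])   # one coefficient per point of S
norm_sq = alpha @ K @ alpha                # K is positive semidefinite, so norm_sq >= 0
norm_of_combination = np.sqrt(norm_sq)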