To compute the dot products in H, we need to employ Eq. (10.23), which expresses the dot product of features in terms of the kernel k evaluated on input patterns,

$$k(\mathbf{x}, \mathbf{x}') = \Phi(\mathbf{x}) \cdot \Phi(\mathbf{x}') \qquad (10.30)$$
We thus obtain decision functions of the more general form

$$f(\mathbf{x}) = \operatorname{sgn}\left(\sum_{i=1}^{m} \alpha_i \cdot \big(\Phi(\mathbf{x}) \cdot \Phi(\mathbf{x}_i)\big) + b\right) \qquad (10.31)$$

$$\phantom{f(\mathbf{x})} = \operatorname{sgn}\left(\sum_{i=1}^{m} \alpha_i \cdot k(\mathbf{x}, \mathbf{x}_i) + b\right) \qquad (10.32)$$
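To make the kernel substitution of Eqs. (10.30)-(10.32) concrete, the following sketch (Python with NumPy, our own illustration rather than anything from the text) uses the degree-2 homogeneous polynomial kernel k(x, x') = (x · x')², for which the feature map Φ can be written out explicitly, and checks that the kernel value equals the feature-space dot product:

```python
import numpy as np

def phi(x):
    """Explicit feature map of the degree-2 polynomial kernel
    k(x, x') = (x . x')^2 for 2-D inputs: maps R^2 -> R^3."""
    x1, x2 = x
    return np.array([x1 ** 2, np.sqrt(2) * x1 * x2, x2 ** 2])

def k_poly2(x, xp):
    """The same quantity computed directly on the input patterns."""
    return np.dot(x, xp) ** 2

x, xp = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(np.dot(phi(x), phi(xp)))  # 1.0 -- dot product taken in H
print(k_poly2(x, xp))           # 1.0 -- same value, no feature map needed
```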
By this definition, it is more efficient to use the kernel to obtain the dot product in H, since Φ(x) · Φ(x_i) will be very expensive to compute explicitly if H is high-dimensional. To this end, the Gaussian radial basis function (GRBF),

$$k(\mathbf{x}, \mathbf{x}_i) = \exp\left(-\gamma \, \|\mathbf{x} - \mathbf{x}_i\|^2\right) \qquad (10.33)$$
is utilized as a similarity measure.
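A minimal NumPy rendering of Eq. (10.33) might look as follows; the function name and the parameter name gamma are our own choices, not the text's:

```python
import numpy as np

def grbf(x, x_i, gamma=1.0):
    """Gaussian radial basis function of Eq. (10.33):
    k(x, x_i) = exp(-gamma * ||x - x_i||^2).
    Equals 1.0 when x == x_i and decays toward 0 as the patterns
    move apart, which is what makes it a similarity measure."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(x_i)) ** 2))

print(grbf([0.0, 0.0], [1.0, 1.0], gamma=0.5))  # exp(-1) ~ 0.3679
```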
As illustrated in Fig. 10.5, the input x and the support vector x_i are nonlinearly mapped (by Φ) into a feature space H, where dot products are computed. In practice, these two layers are computed in one single step through the use of the kernel k. The results are linearly combined by the weights α_i, which are found by solving a quadratic programming problem. The linear combination is then fed into the function σ(·), here the sign function of Eqs. (10.31)-(10.32).
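Put together in code, the two kernel layers and the weighted combination amount to a direct evaluation of Eq. (10.32). A minimal sketch, assuming the weights alpha, the bias b, and the support vectors have already been obtained from the quadratic program:

```python
import numpy as np

def decision(x, support_vectors, alpha, b, gamma=1.0):
    """f(x) = sgn(sum_i alpha_i * k(x, x_i) + b), Eq. (10.32), with the
    GRBF kernel. The nonlinear map into H and the dot products happen
    implicitly, in one step, inside the kernel evaluations."""
    k = np.exp(-gamma * np.sum((support_vectors - x) ** 2, axis=1))
    return np.sign(np.dot(alpha, k) + b)
```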
Figure 10.6 shows a two-dimensional plot of data samples obtained from the database used in the experiment: a two-dimensional feature space in which each sample is labeled as positive or negative according to one query concept. It can be observed that although the data is only two-dimensional, the problem is not linearly separable. Applying the nonlinear GRBF kernel to perform the nonlinear mapping is therefore more appropriate for the SVM classifier than using a linear kernel.
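The same effect is easy to reproduce on synthetic data. The sketch below (using scikit-learn, our choice of library; make_circles merely stands in for the experimental data of Fig. 10.6) fits an SVM with a linear kernel and one with a GRBF kernel to two-dimensional, nonlinearly separable samples:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two-dimensional toy data that no straight line can separate:
# one ring of negative samples enclosing a cluster of positives.
X, y = make_circles(n_samples=200, factor=0.4, noise=0.05, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)
grbf_svm = SVC(kernel="rbf", gamma=1.0).fit(X, y)

print("linear kernel accuracy:", linear_svm.score(X, y))  # near chance level
print("GRBF kernel accuracy:", grbf_svm.score(X, y))      # close to 1.0
```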
10.4.3 Implementation of Support Vector Machine
To construct the optimum hyperplane [cf. Eq. (10.28)], we can solve the following optimization problem:

$$\min_{\mathbf{w},\, b} \; \frac{1}{2}\|\mathbf{w}\|^2 \qquad (10.34)$$

$$\text{subject to} \quad y_i \cdot \big((\mathbf{w} \cdot \mathbf{x}_i) + b\big) \geq 1, \quad i = 1, \ldots, m \qquad (10.35)$$
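For linearly separable data, Eqs. (10.34)-(10.35) form a standard quadratic program and can be handed to any off-the-shelf QP solver. A minimal sketch using cvxopt (an assumed dependency; the variable ordering z = (w, b) and the helper name are ours), with P encoding (1/2)‖w‖² and one constraint row per training pattern:

```python
import numpy as np
from cvxopt import matrix, solvers

def fit_hard_margin(X, y):
    """Solve Eqs. (10.34)-(10.35) over z = (w, b).
    X: (m, d) array of input patterns; y: (m,) labels in {-1, +1}.
    Assumes the data is linearly separable, as the hard margin requires."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    m, d = X.shape
    # Objective (1/2) z^T P z with P = diag(1, ..., 1, 0):
    # penalizes ||w||^2 but leaves the bias b free.
    P = matrix(np.diag(np.r_[np.ones(d), 0.0]))
    q = matrix(np.zeros(d + 1))
    # Each constraint y_i * ((w . x_i) + b) >= 1 becomes a row of G z <= h.
    G = matrix(-y[:, None] * np.c_[X, np.ones(m)])
    h = matrix(-np.ones(m))
    sol = solvers.qp(P, q, G, h)
    z = np.array(sol["x"]).ravel()
    return z[:d], z[d]  # w, b
```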