The above problem can be solved by defining its Lagrangian function as follows:

$$ L(w, b, \alpha) = \frac{1}{2}(w \cdot w) - \sum_{i=1}^{n} \alpha_i \left\{ y_i \left[ (w \cdot x_i) + b \right] - 1 \right\} \qquad (4) $$

where $\alpha_i$ denotes the Lagrange coefficient, and $\alpha_i \ge 0$. The partial derivatives with respect to $w$ and $b$ can be obtained and set to zero:

$$ \sum_{i=1}^{n} y_i \alpha_i = 0, \qquad w = \sum_{i=1}^{n} y_i \alpha_i x_i, \qquad \alpha_i \ge 0 \qquad (5) $$
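For reference, these conditions come from the standard stationarity step: differentiating Eq. (4) with respect to $b$ and $w$ gives

$$ \frac{\partial L}{\partial b} = -\sum_{i=1}^{n} \alpha_i y_i = 0, \qquad \frac{\partial L}{\partial w} = w - \sum_{i=1}^{n} \alpha_i y_i x_i = 0, $$

and solving the second relation for $w$ yields the expression in Eq. (5).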
Substituting Eq. (5) into Eq. (4), the original problem is converted to maximizing the following function subject to the constraints $\sum_{i=1}^{n} y_i \alpha_i = 0$ and $\alpha_i \ge 0,\ i = 1, 2, \ldots, n$:
  (6)
n
1
n
max
Q
(
ʱ
)
=
ʱ
ʱʱ
y y
(
x
x
)
i
i
j
i
j
i
j
2
i
=
1
i j
,
=
1
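Purely as an illustration, Eq. (6) can be handed to a general-purpose constrained optimizer. The sketch below assumes NumPy and SciPy are available, a small sample matrix X with labels y in {-1, +1}, and the function name solve_dual is chosen here only for the example:

import numpy as np
from scipy.optimize import minimize

def solve_dual(X, y):
    # Maximize Q(alpha) of Eq. (6) by minimizing its negative, subject to
    # sum_i alpha_i y_i = 0 and alpha_i >= 0 (the constraints below Eq. (6)).
    n = X.shape[0]
    K = X @ X.T                                   # Gram matrix of (x_i . x_j)
    H = (y[:, None] * y[None, :]) * K             # y_i y_j (x_i . x_j)

    def neg_Q(alpha):
        return 0.5 * alpha @ H @ alpha - alpha.sum()

    res = minimize(neg_Q, x0=np.zeros(n), method="SLSQP",
                   bounds=[(0.0, None)] * n,                      # alpha_i >= 0
                   constraints=[{"type": "eq", "fun": lambda a: a @ y}])
    return res.x                                  # optimal multipliers alpha

For realistic problem sizes, dedicated QP or SMO solvers are preferred; this sketch only mirrors the objective and constraints of Eq. (6).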
Assume that $\alpha_i$ is the optimal solution to the problem; then $w$ can be obtained as $w = \sum_{i=1}^{n} \alpha_i y_i x_i$. In fact, Eq. (6) can be considered as a quadratic programming problem, and it has a unique solution. According to the Kuhn-Tucker conditions, the solution of this optimization problem needs to satisfy the following equation:

$$ \alpha_i \left( y_i \left[ (w \cdot x_i) + b \right] - 1 \right) = 0, \qquad i = 1, \ldots, n \qquad (7) $$
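In numerical practice, Eq. (7) is applied with a small tolerance: multipliers below the tolerance are treated as zero, and the remaining samples are the support vectors. A minimal sketch, assuming the alpha returned above and NumPy arrays X, y (the tolerance value is an implementation choice, not part of the derivation):

import numpy as np

def recover_w(alpha, X, y, tol=1e-6):
    # Eq. (5): w = sum_i alpha_i y_i x_i; the complementarity condition of
    # Eq. (7) implies alpha_i = 0 for non-support vectors, so only samples
    # with alpha_i > tol (the support vectors) actually contribute.
    sv_idx = np.flatnonzero(alpha > tol)
    w = (alpha * y) @ X
    return w, sv_idx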
 
For the non-support vectors, $y_i[(w \cdot x_i) + b] - 1$ in Eq. (7) is greater than 0, so their $\alpha_i$ must be 0; only for the support vectors is this term 0 and $\alpha_i$ possibly nonzero. After determining the optimal solution $w$, any support vector can be substituted into $y_i[(w \cdot x_i) + b] - 1 = 0$ and $b$ can be obtained. To avoid error, the parameter $b$ is usually set as
$$ b = -\frac{1}{2} \sum_{i \in SVs} y_i \alpha_i \left( s_1^{T} x_i + s_2^{T} x_i \right) $$

where $s_1$, $s_2$ denote support vectors and $SVs$ denotes the support vector set. After the parameters $w$ and $b$ are determined, the following optimal classification function can be obtained:
 
$$ f(x) = \operatorname{sgn}\left\{ (w \cdot x) + b \right\} = \operatorname{sgn}\left\{ \sum_{i=1}^{n} \alpha_i y_i (x_i \cdot x) + b \right\} \qquad (8) $$
Since the $\alpha_i$ values corresponding to the non-support vectors are 0, the summation computation in Eq. (8) can be greatly reduced.
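As a final sketch (names and the tolerance are again illustrative choices; s1 and s2 are assumed to be support vectors drawn from the two classes), $b$ can be computed as above and Eq. (8) evaluated by summing only over the support vectors, which is exactly the reduction described here:

import numpy as np

def bias(alpha, X, y, s1, s2, tol=1e-6):
    # b = -1/2 * sum_{i in SVs} y_i alpha_i (s1 . x_i + s2 . x_i)
    sv = alpha > tol
    return -0.5 * np.sum(y[sv] * alpha[sv] * (X[sv] @ s1 + X[sv] @ s2))

def classify(x_new, alpha, X, y, b, tol=1e-6):
    # Eq. (8): f(x) = sgn( sum_i alpha_i y_i (x_i . x) + b ), restricted to
    # the support vectors since all other alpha_i are 0.
    sv = alpha > tol
    return np.sign(np.sum(alpha[sv] * y[sv] * (X[sv] @ x_new)) + b)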
 