attributes (input attributes) and the condition attributes as the decision attributes, so
SVM regression can be used to predict the missing condition attribute values. SVM
regression estimation seeks to estimate functions
f(x) = (w · x) + b,   w, x ∈ R^n, b ∈ R    (4.32)
based on data
(x_1, y_1), . . . , (x_l, y_l) ∈ R^n × R    (4.33)
by minimizing the regularized risk functional
‖w‖²/2 + C · R_emp    (4.34)
where C is a constant determining the trade-off between minimizing the training
error, or empirical risk
R_emp = (1/l) Σ_{i=1}^{l} |y_i − f(x_i)|_ε    (4.35)

and the model complexity term ‖w‖². Here, we use the so-called ε-insensitive loss function
|y − f(x)|_ε = max{0, |y − f(x)| − ε}    (4.36)
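The ε-insensitive loss in Eq. (4.36) and the empirical risk in Eq. (4.35) are straightforward to compute directly. A minimal sketch (function names are my own, not from the text) illustrating that deviations inside the ε-tube incur no penalty:

```python
def eps_insensitive_loss(y, f_x, eps):
    # |y - f(x)|_eps = max{0, |y - f(x)| - eps}, Eq. (4.36):
    # deviations of magnitude at most eps cost nothing.
    return max(0.0, abs(y - f_x) - eps)

def empirical_risk(preds, targets, eps):
    # R_emp = (1/l) * sum_i |y_i - f(x_i)|_eps, Eq. (4.35)
    l = len(targets)
    return sum(eps_insensitive_loss(y, f, eps)
               for y, f in zip(targets, preds)) / l

print(eps_insensitive_loss(1.0, 1.05, eps=0.1))  # 0.0 (inside the tube)
print(eps_insensitive_loss(1.0, 1.50, eps=0.1))  # 0.4 (0.5 deviation minus eps)
```

Outside the tube the penalty grows linearly with the deviation, which makes the loss less sensitive to outliers than a squared error.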
The main insight of the statistical learning theory is that in order to obtain a small risk,
one needs to control both training error and model complexity, i.e. explain the data
with a simple model. The minimization of Eq. (4.34) is equivalent to the following
constrained optimization problem [17]: minimize
τ(w, ξ, ξ*) = ‖w‖²/2 + C · (1/l) Σ_{i=1}^{l} (ξ_i + ξ_i*)    (4.37)
subject to the following constraints
((w · x_i) + b) − y_i ≤ ε + ξ_i    (4.38)

y_i − ((w · x_i) + b) ≤ ε + ξ_i*    (4.39)

ξ_i, ξ_i* ≥ 0    (4.40)
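For a fixed (w, b), the smallest slacks satisfying constraints (4.38)–(4.40) are ξ_i = max{0, ((w·x_i) + b) − y_i − ε} and ξ_i* = max{0, y_i − ((w·x_i) + b) − ε}; substituting them into (4.37) recovers ‖w‖²/2 + C·R_emp, which is why the two formulations are equivalent. A minimal one-dimensional check (variable and function names are my own):

```python
def slacks(w, b, x, y, eps):
    # Smallest feasible slack variables for constraints (4.38)-(4.40).
    xi      = max(0.0, (w * x + b) - y - eps)   # prediction too high
    xi_star = max(0.0, y - (w * x + b) - eps)   # prediction too low
    return xi, xi_star

def tau(w, b, data, eps, C):
    # Objective (4.37): ||w||^2/2 + C * (1/l) * sum_i (xi_i + xi_i*)
    l = len(data)
    total = sum(sum(slacks(w, b, x, y, eps)) for x, y in data)
    return 0.5 * w * w + C * total / l

def regularized_risk(w, b, data, eps, C):
    # Objective (4.34): ||w||^2/2 + C * R_emp with the eps-insensitive loss
    l = len(data)
    r_emp = sum(max(0.0, abs(y - (w * x + b)) - eps) for x, y in data) / l
    return 0.5 * w * w + C * r_emp

data = [(0.0, 1.0), (1.0, 3.2), (2.0, 4.9), (3.0, 7.1)]
# With the minimal slacks plugged in, the two objectives coincide.
print(tau(1.9, 1.0, data, eps=0.2, C=1.0))
print(regularized_risk(1.9, 1.0, data, eps=0.2, C=1.0))
```

At most one of ξ_i and ξ_i* is nonzero for each point, since a prediction cannot be simultaneously more than ε above and more than ε below the target.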
As mentioned above, at each point x_i we allow an error of magnitude ε. Errors
above ε are captured by the slack variables ξ (see constraints 4.38 and 4.39). They
 