where h is the Vapnik-Chervonenkis dimension.
It has been shown [2] that
H^{\Omega}(I) \;\overset{\text{(Jensen)}}{\le}\; H_{\mathrm{ann}}(I) \;\le\; G^{\Omega}(I) \;\le\; \ln \sum_{k=0}^{h} \binom{I}{k} \;\le\; \ln\!\left(\frac{eI}{h}\right)^{h} = h\left(1 + \ln\frac{I}{h}\right).   (44)
The right-hand side of (44) can then be inserted into the bounds in place of ln N.
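For a concrete feel of this capacity term, a minimal Python sketch is given below (illustrative only; the helper names are ours and do not appear in [2]). It evaluates the two rightmost quantities in (44), namely ln of the sum of binomial coefficients and h(1 + ln(I/h)), for a fixed VC dimension h and several sample sizes I ≥ h; the latter quantity is the one inserted in place of ln N.

from math import comb, log

def ln_shatter_bound(I, h):
    # ln of sum_{k=0}^{h} C(I, k): an upper bound on the growth function G^Omega(I)
    return log(sum(comb(I, k) for k in range(h + 1)))

def vc_capacity_term(I, h):
    # h * (1 + ln(I / h)): the closed-form bound from (44), valid for I >= h
    return h * (1 + log(I / h))

if __name__ == "__main__":
    h = 10  # example VC dimension
    for I in (20, 100, 1000, 10000):
        print(I, round(ln_shatter_bound(I, h), 2), round(vc_capacity_term(I, h), 2))

Both quantities grow only logarithmically in I once I exceeds h, which is what makes the substitution for ln N usable for infinite sets of functions.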
We mention that appropriate generalizations from the set of indicator functions (classification) to sets of real-valued functions (regression estimation) can be found in [2] and are based on the notions of: an ε-finite net, the set of classifiers for a fixed real-valued f, and the complete set of classifiers for Ω.
3.1 Classification Learning Task (Infinite Set of Functions)
For brevity, we give only two theorems, for the bounds on V − C and C − V; the bound on |V − C| is their straightforward consequence (analogously to the previous sections).
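The consequence amounts to a union bound; as a sketch (our own formulation, with ε_CV and ε_VC standing for the right-hand sides of the two one-sided bounds):

P\{C - V \le \varepsilon_{CV}\} \ge 1 - \alpha(\eta, n) \ \text{and} \ P\{V - C \le \varepsilon_{VC}\} \ge 1 - \alpha(\eta, n) \ \Longrightarrow \ P\{|V - C| \le \max(\varepsilon_{CV}, \varepsilon_{VC})\} \ge 1 - 2\alpha(\eta, n).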
Theorem 7. Let {Q(z, ω)}_{ω ∈ Ω} be an infinite set of indicator functions with finite Vapnik-Chervonenkis dimension h. Then, with probability 1 − α(η, n) or greater, the following inequality holds true:
C - V \;\le\; \sqrt{\frac{h\left(1 + \ln\frac{2I\left(1 - \frac{1}{n}\right)}{h}\right) - \ln\frac{\eta}{4}}{I\left(1 - \frac{1}{n}\right)}} + \sqrt{\frac{n \ln\frac{1}{\eta}}{2I}}.   (45)
Theorem 8. With probability 1 − α(η, n) or greater, the following inequality holds true:
V - C \;\le\; \left(1 + \frac{1}{n}\right)\sqrt{\frac{h\left(1 + \ln\frac{2I}{h}\right) - \ln\frac{\eta}{4}}{2I}} + \sqrt{\frac{n \ln\frac{1}{\eta}}{2I}}.   (46)
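To make the structure of bounds such as (45) and (46) tangible, a small Python sketch follows (our own illustration; the function names and the exact composition in bound_45 mirror the form of (45) as reconstructed above, so they should be treated as assumptions rather than the authors' code). It combines a Vapnik-style confidence term over the training part with a Hoeffding-style term over a single test fold.

from math import log, sqrt

def vc_term(l, h, eta):
    # Vapnik-style confidence term: sqrt((h * (1 + ln(2l/h)) - ln(eta/4)) / l), for l >= h
    return sqrt((h * (1 + log(2 * l / h)) - log(eta / 4)) / l)

def hoeffding_term(m, eta):
    # Hoeffding-style confidence term for a test sample of size m: sqrt(ln(1/eta) / (2m))
    return sqrt(log(1 / eta) / (2 * m))

def bound_45(I, n, h, eta):
    # Right-hand side of (45) as reconstructed above (an assumption, not verbatim from the source):
    # training part of size I * (1 - 1/n), single test fold of size I / n
    train_size = I * (1 - 1 / n)
    return vc_term(train_size, h, eta) + hoeffding_term(I / n, eta)

if __name__ == "__main__":
    # example: I = 10000 examples, n = 10 folds, VC dimension h = 50, eta = 0.05
    print(round(bound_45(10_000, 10, 50, 0.05), 4))

With these example values the right-hand side evaluates to roughly 0.24, dominated by the Vapnik-style term; increasing I or decreasing h tightens it.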
3.2 Regression Estimation Learning Task (Infinite Set of Functions)
Again, for brevity, we give only two theorems, for the bounds on V − C and C − V; the bound on |V − C| is their straightforward consequence (analogously to the previous sections).
 