which holds true with probability at least $1 - \eta$ simultaneously for all functions in the set, since it holds for the worst one. Hence, in particular, it holds true for the function $\omega_I$, and one gets the bound (13).
For the theorems to follow, we denote the right-hand side of the Vapnik bound by
$$V = R_{\mathrm{emp}}(\omega_I) + \sqrt{\frac{\ln N - \ln \eta}{2I}}.$$
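As a quick numerical illustration (not part of the original text), $V$ can be evaluated directly from this definition; the values of $R_{\mathrm{emp}}(\omega_I)$, $N$, $\eta$ and $I$ below are arbitrary example choices.

```python
import math

def vapnik_bound(r_emp, N, eta, I):
    """Right-hand side V of the Vapnik bound for a finite set of N indicator functions."""
    return r_emp + math.sqrt((math.log(N) - math.log(eta)) / (2 * I))

# Arbitrary example: empirical risk 0.10, N = 100 functions, eta = 0.01, sample size I = 1000.
V = vapnik_bound(0.10, 100, 0.01, 1000)
print(V)  # 0.10 + sqrt((ln 100 - ln 0.01) / 2000) ~= 0.168
```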
Theorem 1. Let $\{Q(z, \omega_j)\}$, $\omega_j \in \Omega$, $j = 1, 2, \ldots, N$, be a finite set of indicator functions (classification task) of size $N$. Then, for any $\eta > 0$, arbitrarily small, there is a small number
$$\alpha(\eta, n) = \sum_{k=1}^{n} \binom{n}{k} (-1)^{k+1} (2\eta)^{k}, \qquad (14)$$
and the number
, I , N , n )= 2 n
n
1 + 1 ln N
ln
η
ε
(
η
2 I
+ n + n
n
ln
η
, (15)
1
2 I
such that:
P
|
V
C
|≤ ε ( η , I , N , n )
1
α ( η , n ) .
(16)
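As a sanity check (not part of the original text), the quantities appearing in Theorem 1 can be evaluated numerically. The sketch below uses $\alpha$ from (14) and $\varepsilon$ from (15) as reconstructed above — the exact coefficients in (15) are an assumption of this reconstruction — and the values of $\eta$, $I$, $N$, $n$ are arbitrary example choices.

```python
import math

def alpha(eta, n):
    """alpha(eta, n) from (14); by the binomial theorem it equals 1 - (1 - 2*eta)**n."""
    return sum(math.comb(n, k) * (-1) ** (k + 1) * (2 * eta) ** k
               for k in range(1, n + 1))

def epsilon(eta, I, N, n):
    """epsilon(eta, I, N, n) from (15), as reconstructed above (coefficients assumed)."""
    a = math.sqrt((math.log(N) - math.log(eta)) / (2 * I))  # Vapnik-type term
    b = math.sqrt(-math.log(eta) / (2 * I))                 # Hoeffding-type term
    return (2 * math.sqrt(n / (n - 1)) + 1) * a + (math.sqrt(n) + math.sqrt(n / (n - 1))) * b

# Arbitrary example values: eta = 0.01, sample size I = 10000, N = 100 functions, n = 10.
eta, I, N, n = 0.01, 10_000, 100, 10
print(alpha(eta, n))          # ~0.183, so the probability measure 1 - alpha is ~0.817
print(epsilon(eta, I, N, n))  # bound on |V - C| guaranteed by (16)
```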
Before proving Theorem 1, two remarks are in order.
Remark 1. The value of $\alpha(\eta, n)$ is monotonic in $\eta$: the smaller the $\eta$ we choose, the smaller $\alpha(\eta, n)$ becomes. Therefore the minimum probability measure $1 - \alpha(\eta, n)$ is suitably large. In particular, $\alpha(\eta, n)$ vanishes as $\eta \to 0^{+}$:
$$\lim_{\eta \to 0^{+}} \sum_{k=1}^{n} \binom{n}{k} (-1)^{k+1} (2\eta)^{k} = \lim_{\eta \to 0^{+}} \left( -\sum_{k=0}^{n} \binom{n}{k} (-1)^{k} (2\eta)^{k} + 1 \right) = \lim_{\eta \to 0^{+}} \left( -(1 - 2\eta)^{n} + 1 \right) = 0.$$
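The limit above can also be checked numerically; the following sketch (not part of the original text) uses the closed form $1 - (1 - 2\eta)^{n}$ obtained in the computation above, with $n = 10$ as an arbitrary example value.

```python
# Numerical check of the limit in Remark 1, using the closed form 1 - (1 - 2*eta)**n
# obtained from the binomial expansion above (n = 10 is an arbitrary example value).
n = 10
for eta in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    print(f"eta = {eta:g}   alpha(eta, n) = {1 - (1 - 2 * eta) ** n:.6f}")
```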
Remark 2. For fixed values of $\eta$, $N$, $n$, the value of $\varepsilon(\eta, I, N, n)$ converges to zero as the sample size $I$ grows large.
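Remark 2 can be illustrated in the same way; the sketch below (not part of the original text) evaluates $\varepsilon$ from (15), as reconstructed above, for growing $I$, with $\eta$, $N$, $n$ held fixed at arbitrary example values.

```python
import math

# epsilon from (15), as reconstructed above, is a sum of terms proportional to 1/sqrt(I),
# so for fixed eta, N, n it shrinks to zero as the sample size I grows.
eta, N, n = 0.01, 100, 10
for I in (10**3, 10**4, 10**5, 10**6):
    eps = ((2 * math.sqrt(n / (n - 1)) + 1) * math.sqrt((math.log(N) - math.log(eta)) / (2 * I))
           + (math.sqrt(n) + math.sqrt(n / (n - 1))) * math.sqrt(-math.log(eta) / (2 * I)))
    print(f"I = {I:>7}   epsilon = {eps:.4f}")
```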
 