The IAdaBoost approach also improves the convergence speed of AdaBoost. The basic idea of the improvement is a modification of the theorem of [18], carried out in order to integrate the Bayes risk. The effects of this modification are a faster convergence towards the optimal risk and a reduction in the number of weak hypotheses to build. Finally, RegionBoost [13] is a new strategy for weighting each classifier. The weight is evaluated at voting time by a technique based on the K nearest neighbors of the example to label. This approach makes it possible to specialize each classifier on areas of the training data.
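The KNN-based weighting just described can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the Euclidean distance, and the use of local accuracy on the neighborhood as the vote weight are assumptions.

```python
import numpy as np

def region_boost_predict(x, classifiers, X_train, y_train, k=5):
    """Predict the label of x by a locally weighted majority vote:
    each classifier's vote counts as its accuracy on the k training
    examples nearest to x."""
    # Find the k nearest training examples to x (Euclidean distance).
    dists = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dists)[:k]
    votes = {}
    for clf in classifiers:
        # Local competence: accuracy of clf on the neighborhood of x.
        local_acc = np.mean([clf(xi) == yi
                             for xi, yi in zip(X_train[nn], y_train[nn])])
        label = clf(x)
        votes[label] = votes.get(label, 0.0) + local_acc
    return max(votes, key=votes.get)
```

A classifier that is globally weak but accurate near x thus gets a large say for that particular query, which is what lets each classifier specialize on a region of the data.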
3.3 Boosting by Exploiting Former Assumptions
To improve the performance of AdaBoost and to keep it from being forced to learn either from noisy examples or from examples that become too difficult to learn during the boosting process, we propose a new approach. It is based on the observation that at each iteration AdaBoost builds a hypothesis on a given sample, makes its updates, and computes the training error according to the results of that hypothesis alone. It does not exploit the results provided by the hypotheses already built on other samples in former iterations. This approach is called AdaBoostHyb.
Program Code
- Input x0 to classify
- S = (x1, y1), ..., (xn, yn) sample
- For i = 1, n Do
    - p0(xi) = 1/n
- EndFor
- t ← 0
- While t ≤ T Do
    - Learn sample St from S with probabilities pt.
    - Build a hypothesis ht on St with weak learner A.
    - εt = apparent error of ht on S, i.e. the weight of the examples
      such that argmax(Σ_{i=1..t} αi hi(x)) ≠ y.
    - αt = 1/2 ln((1 − εt)/εt)
    - For i = 1, m Do
        - If argmax(Σ_{i=1..t} αi hi(xi)) = yi (correctly classified):
          pt+1(xi) = (pt(xi)/Zt) e^(−αt)
        - If argmax(Σ_{i=1..t} αi hi(xi)) ≠ yi (badly classified):
          pt+1(xi) = (pt(xi)/Zt) e^(+αt)
        - (Zt normalizes so that Σi pt+1(xi) = 1)
    - EndFor
    - t ← t + 1
- End While
- Final hypothesis: H(x) = argmax_{y∈Y} Σ_{t : ht(x)=y} αt
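The pseudocode above can be sketched in Python. This is a minimal illustration under assumed details: decision stumps serve as the weak learner A, labels are binary in {−1, +1} so the argmax over classes reduces to the sign of the weighted vote, εt is taken as the weighted error of ht alone, and — the hybrid step — the weight update follows the combined vote of all hypotheses built so far rather than ht by itself.

```python
import numpy as np

def train_stump(X, y, w):
    """Weighted decision stump over one feature/threshold (labels in {-1,+1})."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    _, j, thr, sign = best
    return lambda X: np.where(X[:, j] <= thr, sign, -sign)

def adaboost_hyb(X, y, T=10):
    n = len(y)
    p = np.full(n, 1.0 / n)                   # p0(xi) = 1/n
    hyps, alphas = [], []
    for t in range(T):
        h = train_stump(X, y, p)              # weak hypothesis ht on current weights
        pred = h(X)
        eps = np.clip(np.sum(p[pred != y]), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - eps) / eps) # αt = 1/2 ln((1-εt)/εt)
        hyps.append(h)
        alphas.append(alpha)
        # Hybrid step: classify each example by the combined vote of ALL
        # hypotheses built so far, then raise/lower its weight accordingly.
        agg = np.sign(sum(a * g(X) for a, g in zip(alphas, hyps)))
        p = p * np.exp(np.where(agg == y, -alpha, alpha))
        p /= p.sum()                          # Zt normalisation
    return lambda X: np.sign(sum(a * g(X) for a, g in zip(alphas, hyps)))
```

An example that the cumulative committee already classifies correctly is down-weighted even if the latest stump got it wrong, which is what keeps the procedure from fixating on noisy or overly hard examples.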