of prototypes, such as the research presented in [4] and [25]. The other solution is
to detect such data during the boosting process itself, in which case we speak of
noise management. Following the latter approach, we plan to make the proposed
algorithm more robust to noisy data, either by using neighborhood graphs or by
adapting the weight-update parameters.
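To illustrate the neighborhood-graph idea, the following sketch (our own, not taken
from the cited works) flags instances whose k nearest neighbors largely disagree with
their label; a boosting loop could then cap or freeze the weights of the flagged points
instead of letting them grow. The function name, the value of k and the disagreement
threshold are illustrative assumptions.

    # Hedged sketch: flag likely-mislabeled instances with a k-NN
    # neighborhood graph so a boosting loop can cap their weights.
    # Names and thresholds are illustrative, not the paper's method.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def suspect_noisy(X, y, k=5, disagreement=0.8):
        """Return a boolean mask of instances whose k nearest neighbors
        mostly carry a different label (a common mislabeling heuristic)."""
        y = np.asarray(y)
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        _, idx = nn.kneighbors(X)        # idx[:, 0] is the point itself
        neigh_labels = y[idx[:, 1:]]     # labels of the k true neighbors
        frac_disagree = (neigh_labels != y[:, None]).mean(axis=1)
        return frac_disagree >= disagreement

    # Inside a boosting iteration one could, for example, stop increasing
    # the weight of flagged instances rather than letting it grow
    # exponentially across rounds.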
Finally, a third line of future work aims at studying boosting with a weak
learner that produces several rules per iteration (rule learning [10]). The difficulty
with this type of learner is that it can produce conflicting rules within the same
boosting iteration, and these conflicting rules all receive the same weight from the
boosting algorithm. For the voting procedure, we are considering a combination of
the global weights (those assigned by the boosting algorithm) and the local weights
(those assigned by the learning algorithm), as sketched below.
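A minimal sketch of such a combined vote, under our own assumptions about the data
structures: each boosting round contributes a global weight alpha_t together with a
list of (rule, local_weight) pairs, and each rule predicts a label in {-1, +1}.

    def combined_vote(x, ensemble):
        # ensemble: list of (alpha_t, rules); rules is a list of
        # (predict_fn, local_weight) pairs produced in one boosting round.
        # Each rule's vote is scaled by its global and its local weight.
        score = 0.0
        for alpha_t, rules in ensemble:
            for predict, local_w in rules:
                score += alpha_t * local_w * predict(x)
        return 1 if score >= 0 else -1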
References
1. Vezhnevets, A., Vezhnevets, V.: Modest adaboost: Teaching adaboost to generalize
better, Moscow State University (2002)
2. Bauer, E., Kohavi, R.: An empirical comparison of voting classification algorithms:
Bagging, boosting, and variants. Machine Learning 24, 173-202 (1999)
3. Breiman, L.: Bagging predictors. Machine Learning 26, 123-140 (1996)
4. Brodley, C.E., Friedl, M.A.: Identifying and eliminating mislabeled training in-
stances. In: AAAI/IAAI, vol. 1, pp. 799-805 (1996)
5. Dharmarajan, R.: An efficient boosting algorithm for combining preferences.
Technical report, MIT (September 1999)
6. Dietterich, T.G.: An experimental comparison of three methods for constructing
ensembles of decision trees: bagging, boosting, and randomization. Machine Learn-
ing, 1-22 (1999)
7. Dietterich, T.G.: Ensemble methods in machine learning. In: First International
Workshop on Multiple Classifier Systems, pp. 1-15 (2000)
8. Domingo, C., Watanabe, O.: Madaboost: A modification of adaboost. In: Proc.
13th Annu. Conference on Comput. Learning Theory, pp. 180-189. Morgan Kauf-
mann, San Francisco (2000)
9. Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical
view of boosting. Dept. of Statistics, Stanford University Technical Report (1998)
10. Friedman, J.H., Popescu, B.E.: Predictive learning via rule ensembles (technical
report). Stanford University (7) (2005)
11. Kohavi, R.: A study of cross-validation and bootstrap for accuracy estimation
and model selection. In: International Joint Conference on Artificial Intelligence
(IJCAI) (1995)
12. Littlestone, N., Warmuth, M.K.: The weighted majority algorithm. Information
and Computation 108, 212-261 (1994)
13. Maclin, R.: Boosting classifiers regionally. In: AAAI/IAAI, pp. 700-705 (1998)
14. McDonald, R., Hand, D., Eckley, I.: An empirical comparison of three boosting
algorithms on real data sets with artificial class noise. In: Fourth International
Workshop on Multiple Classifier Systems, pp. 35-44 (2003)
15. Meir, R., El-Yaniv, R., Ben-David, S.: Localized boosting. In: Proc. 13th Annu.
Conference on Comput. Learning Theory, pp. 190-199. Morgan Kaufmann, San
Francisco (2000)