Neural Networks This is a mathematical model inspired by biological neural
networks. A neural network consists of an interconnected group of artificial neurons,
and it processes information using a connectionist approach to computation. In most
cases a neural network is an adaptive system, changing its structure during a learning
phase. Neural networks are used for modeling complex relationships between inputs
and outputs or to find patterns in data. This machine learning method is used
widely in the object detection world because it provides: (a) generalization (small
distortions can be handled easily); (b) expandability (learning a different set of
objects will require hardly any change to the structure of the program); (c) the
ability to represent multiple samples (a class of objects can easily be represented
by multiple samples under multiple conditions); and (d) efficiency (once trained,
the network determines in one single step to what class the object belongs). The
downside of this method is that it requires a large and diverse set of training
examples, as well as substantial processing and storage resources. Small recognition
systems, though, should benefit from all the advantages of using a neural network
as a classifier [22].
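Advantage (d) above means that, once trained, classification is a single feed-forward pass through the network. The sketch below illustrates that pass for a tiny two-layer network; the function name `forward` and the hand-set weight matrices are hypothetical, standing in for weights that training would normally produce.

```python
import math

def sigmoid(x):
    """Standard logistic activation."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    """One feed-forward pass: returns the index of the winning class.

    x         -- input feature vector
    w_hidden  -- one weight row per hidden neuron
    w_out     -- one weight row per output class
    """
    # hidden-layer activations
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    # output scores, one per class
    scores = [sum(wi * hi for wi, hi in zip(row, h)) for row in w_out]
    # the class is decided in this single step
    return scores.index(max(scores))

# Illustrative weights: each hidden neuron passes through one input,
# each output neuron reads one hidden neuron.
w_hidden = [[1.0, 0.0], [0.0, 1.0]]
w_out = [[1.0, 0.0], [0.0, 1.0]]
label = forward([5.0, -5.0], w_hidden, w_out)  # a strongly "class 0" input
```

With real weights learned from data, the same single pass assigns any new object to a class, which is what makes trained networks efficient at recognition time.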
Adaptive Boosting AdaBoost is an adaptive machine learning algorithm, in the
sense that subsequent classifiers built are tweaked in favor of those instances
misclassified by previous classifiers. AdaBoost generates and calls a new weak
classifier in each of a series of rounds. For each call, a distribution of weights
is updated that indicates the importance of the examples in the dataset for the
classification. On each round, the weights of each incorrectly classified examples
are increased, and the weights of each correctly classified example are decreased,
so the new classifier focuses on the examples which have so far eluded correct
classification. Even though AdaBoost can be sensitive to noisy data, its efficiency is
what has drawn many researchers towards using it [ 4 , 16 ].
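One round of the weight update described above can be sketched as follows. The function name `update_weights` is illustrative; the exponential re-weighting is the standard AdaBoost rule, and the weak classifier's predictions are assumed to be given (with weighted error strictly between 0 and 0.5).

```python
import math

def update_weights(weights, y_true, y_pred):
    """One AdaBoost round: re-weight examples given a weak classifier's output.

    weights -- current distribution over examples (sums to 1)
    y_true  -- true labels in {-1, +1}
    y_pred  -- weak classifier's predictions in {-1, +1}
    Returns the classifier's vote alpha and the normalized new distribution.
    """
    # weighted error of the weak classifier (assumed 0 < err < 0.5)
    err = sum(w for w, t, p in zip(weights, y_true, y_pred) if t != p)
    alpha = 0.5 * math.log((1 - err) / err)
    # misclassified examples are up-weighted, correct ones down-weighted
    new_w = [w * math.exp(-alpha if t == p else alpha)
             for w, t, p in zip(weights, y_true, y_pred)]
    total = sum(new_w)
    return alpha, [w / total for w in new_w]
```

After the update the misclassified examples carry a larger share of the distribution, so the weak classifier built in the next round is pushed to get them right.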
SVM Support Vector Machines are based on the concept of decision planes that
define decision boundaries (Fig. 4.5). A decision plane is one that separates
a set of objects having different class memberships. The basic SVM takes a set of
input data and predicts, for each given input, which of two possible classes forms the
output, making it a non-probabilistic binary linear classifier. Given a set of training
examples, each marked as belonging to one of two categories, an SVM training
algorithm builds a model that assigns new examples to one category or the other.
An SVM model is a representation of the examples as points in space, mapped so
that the examples of the separate categories are divided by a clear gap that is as
wide as possible. New examples are then mapped into that same space and predicted
to belong to a category based on which side of the gap they fall on. If the sets of
objects can be classified into their respective groups by a line, the SVM is linear.
Most classification tasks, however, are not that simple, and often more complex
structures are needed in order to make an optimal separation. Using different sets
of mathematical functions called kernels, an SVM can try to rearrange the input data
so that the gap between different classes is as wide as possible and the separation
line can be clearly drawn between the classes. SVM is a very powerful and
easy-to-understand tool, which explains its popularity [13, 26].
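A minimal sketch of a linear SVM, trained by sub-gradient descent on the hinge loss: the gap (margin) around the separating line is widened until every training point sits outside it. The function names, the toy 2-D data, and the learning-rate settings are all illustrative, not a production implementation.

```python
def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Fit w, b for the decision line w.x + b = 0 via hinge-loss descent.

    X -- list of feature vectors; y -- labels in {-1, +1}
    lam is the regularization strength that keeps the margin wide.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            margin = t * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:
                # point is inside the margin: move the line toward fixing it
                w = [wi - lr * (lam * wi - t * xi) for wi, xi in zip(w, x)]
                b = b + lr * t
            else:
                # point is safely outside: only the regularizer shrinks w
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    """New examples are classified by which side of the line they fall on."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy linearly separable data: two clusters on opposite sides of the origin.
X = [[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
```

A kernel SVM follows the same idea but computes the separating surface in a transformed space, which is what handles the non-linear cases mentioned above.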