based on an object [1], while object localization means finding the object's location and scale in an image.
Many object detection algorithms follow the model of detection by parts, which was introduced by Fischler and Elschlager [2], who combined structural modeling of the object with reliable part detectors. The basic idea behind this model is that detectors for the individual parts of an object are easier to build than a detector for the full object [3, 4]. In practice, these methods slide a window or template mask across the image and classify the content of each local window as background or target [5, 6]. This approach has been used successfully to detect rigid objects such as cars and faces, and has even been applied to articulated objects such as pedestrians [7-9].
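To make the sliding-window scheme concrete, the following is a minimal sketch, not taken from any of the cited detectors; the classify_window scoring function, window size, stride, and threshold are all hypothetical placeholders for whatever classifier a real detector would use.

import numpy as np

def sliding_window_detect(image, win_h, win_w, stride, classify_window, threshold):
    """Scan the image with a fixed-size window and keep windows whose
    classifier score exceeds the threshold (i.e. likely targets)."""
    detections = []
    rows, cols = image.shape[:2]
    for y in range(0, rows - win_h + 1, stride):
        for x in range(0, cols - win_w + 1, stride):
            patch = image[y:y + win_h, x:x + win_w]
            score = classify_window(patch)          # background vs. target score
            if score > threshold:
                detections.append((x, y, win_w, win_h, score))
    return detections

# Toy usage with a stand-in "classifier" (mean brightness), purely illustrative.
img = np.random.rand(120, 160)
hits = sliding_window_detect(img, 32, 32, 8, lambda p: p.mean(), 0.55)
print(len(hits), "candidate windows")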
Later, a frequency-based model was proposed for moving backgrounds that contain repetitive structures. The authors considered temporal neighborhoods of the pixels and applied local Fourier transforms to the scene [10]. Feature vectors were then generated from these transforms to build a background model, which was applied to moving objects and backgrounds in both synthetic and real image sequences [11].
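The following is only a rough sketch of this frequency idea, assuming the per-pixel feature vector is the magnitude spectrum of its temporal signal and that foreground is flagged where the spectra deviate from the background model; the construction in [10] differs in its exact neighborhoods and distance measure, and the threshold used here is arbitrary.

import numpy as np

def temporal_spectra(frames):
    """frames: array of shape (T, H, W). Per-pixel FFT magnitudes of the
    temporal signal, shape (T, H, W)."""
    return np.abs(np.fft.fft(frames, axis=0))

def build_background_model(background_frames):
    # Feature vector per pixel = its temporal magnitude spectrum.
    return temporal_spectra(background_frames)

def detect_foreground(frames, model, threshold):
    spectra = temporal_spectra(frames)
    # Per-pixel distance between current spectra and the background model.
    dist = np.linalg.norm(spectra - model, axis=0)
    return dist > threshold            # True where the scene departs from the model

# Toy usage on a synthetic sequence with an injected "moving object" region.
bg = np.random.rand(16, 60, 80)
test = bg.copy()
test[:, 20:30, 30:40] += 1.0
mask = detect_foreground(test, build_background_model(bg), threshold=2.0)
print(mask.sum(), "foreground pixels")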
On the other hand, another popular approach extracts local interest points from the image and then classifies only the regions that contain these points, instead of examining all possible sub-windows as in the previous methods [12]. The common weakness of all the above approaches is that they can fail when the local image information is insufficient, for example when the target is very small or unclear [13].
Accordingly, feature-based image matching depends on analyzing the extracted features and finding the corresponding relationships between them [12]. Such matching is often not accurate enough because images are frequently noisy and differ in illumination and scale.
Recently, extracted features have been widely applied in the field of object matching. In 1999, Lowe presented the scale-invariant feature transform (SIFT), which combines a robust descriptor with a difference-of-Gaussians detector [14, 15]. Although SIFT is invariant to rotation and image scale, its computation is expensive and slow because it has to extract 128-dimensional descriptors [16]. This problem was addressed in 2008 by Bay, who proposed speeded-up robust features (SURF), modeled with 64-dimensional descriptors. SURF uses integral images to compute a rough approximation of the Hessian matrix, and it therefore tends to be faster than SIFT [17, 18]. In 2009, Lue and Oubong compared SIFT and SURF; they pointed out that SURF performs better overall but is not efficient under rotational changes [19]. In fact, the effective power of SURF is reduced because it ignores the geometric relationships between features [19, 20].
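As a brief illustration of this descriptor-and-matching style, the snippet below uses OpenCV's SIFT implementation with a brute-force matcher and Lowe's ratio test (SURF requires a non-free opencv-contrib build, so SIFT stands in here); the image file names are placeholders and the 0.75 ratio is a conventional choice, not a value from the cited papers.

import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                      # 128-dimensional descriptors
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with the ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(len(good), "reliable matches")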
This chapter presents a proposed algorithm based on the geometrical shape of the object and the relationships between the outer points of its contour. The algorithm is divided into two parts: one constructs an own signature for each object in an image, and the other matches the signatures of all object shapes to determine exactly which objects they describe. These two parts are carried out in four steps. First, signatures are constructed for all objects in an image and saved as data in the system. Second, signatures are constructed for all test input objects. Third, the input signatures are compared with the previously saved signatures using statistical methods. Finally, these signatures are used to detect and define the objects in the image.
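A minimal sketch of these two parts is given below, under the assumption that the signature is a normalized centroid-to-contour distance profile and that the statistical comparison is a Pearson correlation maximized over circular shifts; the chapter's actual signature construction and statistics may differ, and the sample count of 128 is an arbitrary choice.

import numpy as np
import cv2

def shape_signature(binary_mask, n_samples=128):
    """Signature from the outer contour of a uint8 binary mask:
    centroid-to-contour distances, scale-normalized and resampled
    to a fixed length (translation and scale invariant)."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    centroid = contour.mean(axis=0)
    dists = np.linalg.norm(contour - centroid, axis=1)
    dists /= dists.max()                                   # scale invariance
    idx = np.linspace(0, len(dists) - 1, n_samples).astype(int)
    return dists[idx]

def match_score(sig_a, sig_b):
    """Statistical comparison of two signatures; taking the maximum
    correlation over circular shifts gives tolerance to rotation."""
    return max(np.corrcoef(np.roll(sig_a, s), sig_b)[0, 1]
               for s in range(len(sig_a)))

In this sketch, the first two steps build signatures for the stored and the test objects with shape_signature, the third step compares them with match_score, and the stored signature giving the highest score determines how the object is finally defined.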
In fact, the proposed approach introduces an idea to detect an object based on its outer shape by constructing an own signature, which makes the detection free from constraints such as rotation, size, and position in the image. This idea may be used in many fields, such as identifying kinds of plants, fruits, and any other objects based on their shapes. The