Besides the feature-centric evaluation, other ways to exploit the shared information are possible. Image features could be selected in such a way that they are reused as much as possible when scanning the image. This approach, however, requires more complex learning methods. Alternatively, the response of the classifier at one position could be used as a starting point (or as a feature) at a neighboring location. Such access to previous results should provide a good initial guess, as the responses of classifiers at neighboring positions are highly correlated. However, this approach would also increase the complexity of the learning system and would most likely require iterative retraining of the classifier, which would significantly prolong learning. On the other hand, the proposed approach of learning suppression classifiers can be used with existing detectors, and the suppression classifiers are learned much faster than the original detector.
The suppression of some positions could be especially beneficial for certain types of detectors and on certain computational platforms. If features that need normalization are used (e.g., Haar-like features and other linear features), suppressing some positions removes the need for the possibly expensive computation of the local normalization coefficient. Also, on some platforms, suppression could lead to faster execution, as a possibly deep computational pipeline does not have to be started for some positions.
The proposed neighborhood suppression method is presented in detail in Sec-
tion 2 together with an algorithm able to learn the suppression classifiers. Re-
sults achieved by this approach are shown and discussed in Section 3. Finally,
the paper is summarized and conclusions are drawn in Section 4.
2 Learning Neighborhood Suppression
As discussed before, we propose to learn classifiers that suppress the evaluation of detection classifiers in the neighborhood of the currently examined image window. Such an approach can improve detection speed only if the suppression classifiers incur very low overhead. This can be achieved by reusing computations already performed by the detection classifier itself. Most naturally, these reused computations can be the responses of image features, which are part of most real-time detectors [14,10,11,1,2,4,6,15,13]. In our work, the focus is only on these real-time detectors, as they are the hardest to speed up further, and the speed of slower detectors can be improved by already known techniques [12,13].
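The idea of reusing the detector's feature responses can be illustrated with a minimal sketch. All names here (`scan_image`, the threshold parameters, the linear form of the suppression classifier) are illustrative assumptions, not the authors' implementation; the point is only that the suppression classifier operates on responses the detector has already computed, so its overhead reduces to a cheap linear combination.

```python
# Hypothetical sketch: reusing a detector's feature responses to drive a
# cheap suppression classifier for neighboring scan positions.
# All names and the one-neighbor suppression scheme are illustrative.

def scan_image(positions, detector_features, detector_threshold,
               suppression_weights, suppression_threshold):
    """Scan positions left to right; a high suppression score at
    position p skips detector evaluation at p + 1 (its right neighbor)."""
    suppressed = set()
    detections = []
    for p in positions:
        if p in suppressed:
            continue  # predicted uninformative by the preceding neighbor
        # Feature responses are computed once and reused by both classifiers.
        responses = [f(p) for f in detector_features]
        if sum(responses) > detector_threshold:
            detections.append(p)
        # Suppression classifier: a linear combination of the SAME
        # responses, so its overhead is just a dot product.
        supp_score = sum(w * r for w, r in zip(suppression_weights, responses))
        if supp_score > suppression_threshold:
            suppressed.add(p + 1)
    return detections
```

In a real detector the suppressed set would cover a 2-D neighborhood of scanning positions rather than a single right neighbor.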
The amount of information carried by the reused features that is relevant to the decision task at a neighboring location will surely vary with different types of features and objects. It will also decrease with the distance between the two areas, as their mutual overlap decreases.
In the following text, it is assumed that the detector for which the neighborhood suppression classifier is to be learned is a soft cascade [1]. This does not limit the proposed approach, as extending it to detectors with different attentional structures is straightforward.
The soft cascade is a sequential decision strategy based on a majority vote of simple functions h_t : X → R, which are called weak hypotheses in the context of boosting methods [8]:
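The soft-cascade decision strategy can be sketched as follows: the running sum of weak-hypothesis responses is compared to a per-stage rejection threshold r_t, and the window is rejected as soon as the sum falls below it. This is a minimal illustration of the standard soft cascade of [1], not the authors' code; the function and parameter names are assumptions.

```python
# Minimal sketch of soft-cascade evaluation: accumulate weak-hypothesis
# responses and reject early when the running sum drops below the
# per-stage rejection threshold. Names are illustrative.

def soft_cascade(x, weak_hypotheses, rejection_thresholds):
    """Return (accepted, stages_evaluated) for sample x."""
    H = 0.0
    for t, (h, r) in enumerate(zip(weak_hypotheses, rejection_thresholds), 1):
        H += h(x)      # accumulate weak hypothesis response h_t(x)
        if H < r:      # early rejection: remaining stages are skipped
            return False, t
    return True, len(weak_hypotheses)
```

Early rejection is what makes the cascade fast on background windows; the suppression classifiers proposed here aim to avoid even starting this evaluation at some positions.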