To create such an optimal strategy, WaldBoost combines AdaBoost [9] and Wald's sequential probability ratio test. AdaBoost iteratively selects the most informative weak hypotheses $h_t$. The threshold $\theta_t$ is then selected in each iteration such that as many negative training samples as possible are rejected while asserting that the likelihood ratio estimated on the training data
$$R_t = \frac{p(H_t(x) \mid y = -1)}{p(H_t(x) \mid y = +1)} \qquad (4)$$

satisfies $R_t \geq 1/\alpha$.
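A minimal sketch of this threshold selection, assuming the strong-classifier responses $H_t(x)$ have already been collected for both classes; the function and parameter names are illustrative rather than from the paper, and the cumulative class frequencies below a candidate threshold serve as a crude estimate of the ratio in (4):

```python
import numpy as np

def select_threshold(H_pos, H_neg, alpha, n_candidates=256):
    """Pick theta_t rejecting as many negatives as possible while the
    estimated likelihood ratio stays >= 1/alpha (hypothetical helper).

    H_pos, H_neg: responses H_t(x) on positive (y=+1) / negative (y=-1)
    training samples; alpha: the target miss-rate bound.
    """
    lo = min(H_pos.min(), H_neg.min())
    hi = max(H_pos.max(), H_neg.max())
    best_theta, best_rejected = None, -1.0
    for theta in np.linspace(lo, hi, n_candidates):
        p_neg = np.mean(H_neg <= theta)  # negatives rejected at this theta
        p_pos = np.mean(H_pos <= theta)  # positives lost at this theta
        # Keep the threshold only if the ratio estimate satisfies (4).
        if (p_pos == 0 or p_neg / p_pos >= 1.0 / alpha) and p_neg > best_rejected:
            best_theta, best_rejected = theta, p_neg
    return best_theta  # None if no candidate meets the constraint
```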
To learn the suppression classifiers we follow the classifier emulation approach of [13], which treats an existing detector as a black box producing labels for a new WaldBoost learning problem. However, when learning the suppression classifiers, the algorithm differs in three distinct aspects.
The first change is that when learning a new weak hypothesis $h_t$, only the look-up table function $l_t$ is learned, while the feature $f_t$ is reused from the original detector. The selection of the optimal weak hypothesis is generally the most time-consuming step in WaldBoost, so restricting the set of features makes learning the suppression classifier very fast.
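With the feature $f_t$ fixed, fitting $l_t$ reduces to filling one table entry per feature bin. A sketch under the assumption that the weak hypotheses are real-valued domain-partitioning responses (Real AdaBoost log-odds); the names are ours:

```python
import numpy as np

def fit_lut(bin_idx, y, w, n_bins=64, eps=1e-7):
    """Fit the look-up table l_t over the reused, quantized feature f_t.

    bin_idx: f_t's bin index for every sample; y: labels in {-1, +1};
    w: current AdaBoost sample weights.
    """
    W_pos = np.bincount(bin_idx[y == +1], weights=w[y == +1], minlength=n_bins)
    W_neg = np.bincount(bin_idx[y == -1], weights=w[y == -1], minlength=n_bins)
    # Domain-partitioning weak hypothesis: half log-odds per bin.
    return 0.5 * np.log((W_pos + eps) / (W_neg + eps))
```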
The second difference is that the new data labels are obtained by evaluating the original detector on a different image position than the one the newly created classifier gets its information from (the position containing the original features $f_t$). This corresponds to the fact that we want to predict the response of the detector in the neighborhood of the evaluated position.
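For concreteness, a sketch of this labeling step, with the detector treated as an opaque callable; the interface shown here is hypothetical:

```python
def make_labels(detector, image, positions, offset):
    """Label each scanned position by the black-box detector's decision at
    a neighboring position, so the suppression classifier learns to
    predict the detector's response in its neighborhood.

    detector(image, (x, y)) -> +1/-1 is an assumed interface;
    offset: displacement (dx, dy) between the feature position and the
    position where the label is taken.
    """
    dx, dy = offset
    return [detector(image, (x + dx, y + dy)) for (x, y) in positions]
```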
The final difference is that the set of training samples is pruned twice in each iteration of the learning algorithm. As expected, samples rejected by the new suppression classifier must be removed from the training set. In addition, samples rejected by the original classifier must be removed as well. This corresponds to the behavior during scanning, when only those features which the detector needs to make a decision are computed. Consequently, the suppression classifiers can also use only these computed features to make their own decision. The whole algorithm for learning a suppression classifier is summarized in Algorithm 1.
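The paper's Algorithm 1 is not reproduced here; the following loop is only our sketch of the double pruning it describes, with the individual steps abstracted into placeholder callables:

```python
import numpy as np

def learn_suppression(samples, labels, T, alpha,
                      learn_weak, select_theta, original_rejects):
    """Sketch of suppression-classifier learning with double pruning.

    learn_weak(samples, labels) -> l_t, a callable weak hypothesis that
    reuses the original feature f_t; select_theta picks theta_t as above;
    original_rejects(t, i) -> True if the original detector has rejected
    sample i by stage t. All three are placeholders, not the paper's API;
    samples and labels are assumed to be NumPy arrays.
    """
    active = np.arange(len(samples))   # indices still in the training set
    H = np.zeros(len(samples))         # accumulated responses H_t(x)
    stages = []
    for t in range(T):
        l_t = learn_weak(samples[active], labels[active])
        H[active] += l_t(samples[active])
        theta_t = select_theta(H[active], labels[active], alpha)
        stages.append((l_t, theta_t))
        # Pruning 1: drop samples rejected by the new suppression classifier.
        active = active[H[active] > theta_t]
        # Pruning 2: drop samples the original detector has rejected; their
        # remaining feature values are never computed during scanning.
        keep = np.array([not original_rejects(t, i) for i in active], dtype=bool)
        active = active[keep]
    return stages
```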
The neighborhood position is suppressed only when the suppression soft cascade ends with a $-1$ decision. This way, the largest possible miss rate introduced by the suppression mechanism equals $\alpha$. The previous statement also holds when the detector is accompanied by multiple suppression classifiers, which allows an even higher speed-up, still with a controlled error.
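At scanning time, the early-exit evaluation of such a suppression soft cascade could look as follows; the data layout (per-stage look-up tables and thresholds) is an assumption on our part:

```python
def suppresses(stages, feature_bins):
    """Evaluate the suppression soft cascade at a neighboring position.

    stages: list of (lut, theta_t) pairs learned as sketched above;
    feature_bins: quantized responses of the reused features f_t at the
    evaluated position. Returns True only on a -1 (rejection) decision.
    """
    H = 0.0
    for (lut, theta), v in zip(stages, feature_bins):
        H += lut[v]        # look-up table response l_t(f_t(x))
        if H <= theta:     # early -1 decision: suppress this neighbor
            return True
    return False           # never rejected: leave it to the full detector
```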
Also, multiple neighboring positions can be suppressed by a single classifier. Such behavior requires only a slight change in Algorithm 1, where the training labels now become positive when the original detector gives a positive result at any of the positions which should be suppressed.
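Under the same hypothetical detector interface as above, the changed labeling rule amounts to an OR over the covered offsets:

```python
def multi_position_label(detector, image, pos, offsets):
    """Training label for a suppression classifier covering several
    neighbors: +1 iff the black-box detector fires at any position the
    classifier is allowed to suppress (assumed detector interface).
    """
    x, y = pos
    for (dx, dy) in offsets:
        if detector(image, (x + dx, y + dy)) == +1:
            return +1
    return -1
```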
2.2 Suppression in Real-Time Scanning Windows
Suppression with classifiers which reuse discrete-valued features is especially well suited to wide processor and memory architectures. On those architectures,