$$H_T(x) = \sum_{t=1}^{T} h_t(x). \qquad (1)$$
The weak hypotheses often internally operate with discrete values corresponding to partitions of the object space $\chi$. Such weak hypotheses are called by Schapire and Singer [9] space partitioning weak hypotheses. Moreover, the weak hypotheses usually make their decision based only on a single image feature which is either discrete (e.g. LBP) or is quantized (e.g. Haar-like features and a threshold function). In the further text, such functions $f : \chi \rightarrow \mathbb{N}$ are referred to in general simply as features, and the weak hypotheses are combinations of such features and look-up table functions $l : \mathbb{N} \rightarrow \mathbb{R}$:

$$h_t(x) = l_t(f_t(x)). \qquad (2)$$
In the further text, $c_t^{(j)}$ specifies the real value assigned by $l_t$ to output $j$ of $f_t$.
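As an illustration, the following Python sketch shows how equations (1) and (2) combine; the names (WeakHypothesis, strong_response, quantize) are hypothetical, not from the paper. Each weak hypothesis indexes a small table of real values by the discrete output of its feature, and the strong classifier simply sums the table entries.

```python
import numpy as np

class WeakHypothesis:
    """h_t(x) = l_t(f_t(x)), eq. (2): a feature f_t mapping a sample to a
    bin index, combined with a look-up table l_t of real values c_t^(j)."""
    def __init__(self, feature, lut):
        self.feature = feature   # f_t : X -> {0, ..., K-1}
        self.lut = lut           # l_t : bin index -> real value

    def __call__(self, x):
        return self.lut[self.feature(x)]

def strong_response(weak_hypotheses, x):
    """H_T(x) = sum_t h_t(x), eq. (1)."""
    return sum(h(x) for h in weak_hypotheses)

def quantize(value, bin_edges):
    """Turn a real-valued feature response (e.g. Haar-like) into a bin
    index; discrete features such as LBP need no quantization."""
    return int(np.searchsorted(bin_edges, value))
```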
The decision strategy $S$ of a soft cascade is a sequence of decision functions $S = S_1, S_2, \ldots, S_T$, where $S_t : \mathbb{R} \rightarrow \{+1, -1\}$. The decision functions $S_t$ are evaluated sequentially, and the strategy is terminated with a negative result when any of the decision functions outputs $-1$. If none of the decision functions rejects the classified sample, the result of the strategy is positive.
Each of the decision functions $S_t$ bases its decision on the tentative sum of the weak hypotheses $H_t$, $t < T$, which is compared to a threshold $\theta_t$:

$$S_t(x) = \begin{cases} +1, & \text{if } H_t(x) > \theta_t \\ -1, & \text{if } H_t(x) \leq \theta_t \end{cases} \qquad (3)$$
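The sequential evaluation of eq. (3) can be sketched as follows (evaluate_soft_cascade and thresholds are hypothetical names); the early exit on $H_t(x) \leq \theta_t$ is what makes a soft cascade fast on the typically dominant negative samples.

```python
def evaluate_soft_cascade(weak_hypotheses, thresholds, x):
    """Decision strategy S of eq. (3): accumulate the tentative sum
    H_t(x) and reject (-1) as soon as it drops to or below theta_t;
    if no decision function rejects, the result is positive (+1)."""
    H = 0.0
    for h, theta in zip(weak_hypotheses, thresholds):
        H += h(x)          # tentative sum H_t(x)
        if H <= theta:     # S_t(x) = -1: early termination
            return -1
    return +1
```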
In this context, the task of learning a suppression classifier can be formalized as learning a new soft cascade with a decision strategy $S$ and hypotheses $h_t = l_t(f_t(x))$, where the features $f_t$ of the original classifier are reused and only the look-up table functions $l_t$ are learned.
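As a minimal sketch of what relearning only the tables could look like, the helper below fills the values $c_t^{(j)}$ for a fixed feature using the domain-partitioning rule of Schapire and Singer [9] on weighted training data. The name fit_lut and its interface are made up, and the paper itself learns the tables with WaldBoost as described below.

```python
import numpy as np

def fit_lut(feature, X, y, w, n_bins, eps=1e-8):
    """Refit the look-up table for a *fixed* feature f_t using the
    domain-partitioning rule of Schapire and Singer [9]:
    c^(j) = 0.5 * ln(W+_j / W-_j), where W+_j (W-_j) is the total
    weight of positive (negative) samples falling into bin j."""
    w_pos = np.full(n_bins, eps)   # eps smooths empty bins
    w_neg = np.full(n_bins, eps)
    for xi, yi, wi in zip(X, y, w):
        j = feature(xi)
        if yi > 0:
            w_pos[j] += wi
        else:
            w_neg[j] += wi
    return 0.5 * np.log(w_pos / w_neg)
```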
2.1 Learning Suppression with WaldBoost
Soft cascades can be learned by several different algorithms [1,2]. We chose the WaldBoost algorithm [11,13] by Sochman and Matas, which is relatively simple to implement, guarantees that the created classifiers are optimal on the training data, and produces classifiers that are very fast in practice. The WaldBoost algorithm described in the following text is a slightly simplified version of the original algorithm. The presented version is specific to learning soft cascades.
Given a weak learner algorithm, training data $\{(x_1, y_1), \ldots, (x_m, y_m)\}$, $x \in \chi$, $y \in \{-1, +1\}$, and a target miss rate $\alpha$, the WaldBoost algorithm solves the problem of finding a decision strategy whose miss rate $\alpha_S$ is lower than $\alpha$ and whose average evaluation time $T_S = E(\arg\min_i (S_i = -1))$ is minimal:

$$S^{*} = \arg\min_{S} T_S, \quad \text{s.t. } \alpha_S < \alpha. \qquad (4)$$
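Both quantities in this objective are easy to estimate empirically. The sketch below (a hypothetical helper, reusing the cascade evaluator from above) measures the miss rate $\alpha_S$ as the fraction of positive samples rejected early, and the average evaluation time $T_S$ as the mean index at which evaluation stops.

```python
def empirical_criteria(weak_hypotheses, thresholds, X, y):
    """Estimate alpha_S (fraction of positives rejected early) and the
    average evaluation time T_S (mean index of the first rejecting
    decision function, or T when no function rejects)."""
    T = len(weak_hypotheses)
    misses, total_steps = 0, 0
    n_pos = sum(1 for yi in y if yi > 0)
    for xi, yi in zip(X, y):
        H, stop = 0.0, T
        for t, (h, theta) in enumerate(zip(weak_hypotheses, thresholds), 1):
            H += h(xi)
            if H <= theta:            # early rejection at stage t
                stop = t
                if yi > 0:
                    misses += 1       # a positive sample was missed
                break
        total_steps += stop
    return misses / max(n_pos, 1), total_steps / len(X)
```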