Here, the use of negative samples becomes essential, as the RBF centers should be moved slightly away from these clusters. Shifting the centers reduces the similarity scores for the negative samples, so that more favorable similarity scores can be obtained for any positive samples in the same neighborhood during the next round of retrieval.
Recall that the set of negative samples is denoted by $X^- = \{\mathbf{x}_i\}_{i=1}^{N_n}$, where $N_n$ is the number of these samples. At the $n$-th iteration, let the input vector $\mathbf{x}$ (randomly selected from the negative set) be the closest point to the center $\mathbf{z}_{i^*}$, such that:

$$i^* = \arg\min_{i \in \{1, \ldots, N_m\}} \left( \left\| \mathbf{x} - \mathbf{z}_i \right\|_M \right) \qquad (2.62)$$
Then, the center $\mathbf{z}_{i^*}$ is modified by the anti-reinforcement learning rule:

$$\mathbf{z}_{i^*}(n+1) = \mathbf{z}_{i^*}(n) - \eta(n) \left[ \mathbf{x} - \mathbf{z}_{i^*}(n) \right] \qquad (2.63)$$

where $\eta$ is a learning constant which decreases monotonically with the number of iterations, and $0 < \eta < 1$.
The algorithm is repeated by selecting a new sample from the set of input samples, $\{\mathbf{x}_i\}_{i=1}^{N_n}$, and stops after a maximum number of iterations is reached.
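To make the procedure concrete, the following NumPy sketch implements one plausible reading of Eqs. (2.62) and (2.63). The plain Euclidean norm (standing in for the weighted norm $\|\cdot\|_M$), the linear decay schedule for $\eta(n)$, and all function and parameter names are illustrative assumptions rather than the exact prescription of the text.

```python
import numpy as np

def anti_reinforce(centers, negatives, eta0=0.1, max_iter=100, seed=0):
    """Anti-reinforcement learning sketch (Eqs. 2.62-2.63, under the
    assumptions noted above): repeatedly pick a random negative sample,
    find its nearest RBF center, and push that center slightly away."""
    rng = np.random.default_rng(seed)
    centers = centers.astype(float).copy()
    for n in range(max_iter):
        x = negatives[rng.integers(len(negatives))]               # random negative sample
        i_star = np.argmin(np.linalg.norm(centers - x, axis=1))   # Eq. (2.62), Euclidean norm
        eta = eta0 * (1.0 - n / max_iter)                         # assumed decay of eta(n)
        centers[i_star] -= eta * (x - centers[i_star])            # Eq. (2.63): move away from x
    return centers
```

Running such an update on the current centers and the accumulated negative set would yield the shifted centers used in the next retrieval round.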
Table 2.11 summarizes the learning procedure of the ARBF network for image retrieval. This includes the learning steps explained in Sects. 2.4.3.1–2.4.3.3.
2.4.4 Gradient-Descent Procedure
Apart from the ARBFN model, the RBF parameters can also be obtained by a gradient-descent procedure [39, 40]. This procedure is employed to optimize all three parameters, $\mathbf{z}_i$, $\sigma_i$, and $\lambda_i$, of each RBF unit. Here, all training samples (both positive and negative) are assigned to the RBF centers, and the linear weights are used to control the output of each RBF unit. Thus, the mapping function becomes:
$$f(\mathbf{x}) = \sum_{i=1}^{N} \lambda_i K(\mathbf{x}, \mathbf{z}_i) = \sum_{i=1}^{N} \lambda_i \exp\left( -\frac{\left\| \mathbf{x} - \mathbf{z}_i \right\|_M^2}{\sigma_i^2} \right) \qquad (2.64)$$

where $\{\mathbf{z}_i\}_{i=1}^{N} = X^+ \cup X^-$. During relevance feedback learning, the network attempts to minimize the following error function:

$$\mathcal{E}_f = \frac{1}{2} \sum_{j=1}^{N} e_j^2 = \frac{1}{2} \sum_{j=1}^{N} \left[ y_j - \sum_{i=1}^{N} \lambda_i K(\mathbf{x}_j, \mathbf{z}_i) \right]^2 \qquad (2.65)$$
where $e_j$ is the error signal for the training sample $\mathbf{x}_j$, and $y_j$ represents the desired output of the $j$-th training sample. The network parameters can then be obtained by minimizing this error function through gradient descent.
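A minimal NumPy sketch of Eqs. (2.64) and (2.65) follows, together with the gradient of $\mathcal{E}_f$ with respect to the linear weights $\lambda_i$. The plain Euclidean norm in place of $\|\cdot\|_M$, the restriction of the gradient to $\lambda_i$ alone (the cited procedure also updates $\mathbf{z}_i$ and $\sigma_i$), and all names are assumptions for illustration.

```python
import numpy as np

def kernel_matrix(X, centers, sigmas):
    """Gaussian kernel K(x_j, z_i) of Eq. (2.64); a plain Euclidean
    norm stands in for the weighted norm ||.||_M."""
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / sigmas ** 2)       # shape: (num_samples, N)

def error_and_grad(X, y, centers, sigmas, lambdas):
    """Eq. (2.65): E_f = 0.5 * sum_j e_j^2, with e_j = y_j - f(x_j),
    and the gradient dE_f/dlambda_i = -sum_j e_j * K(x_j, z_i)."""
    K = kernel_matrix(X, centers, sigmas)
    e = y - K @ lambdas                          # error signals e_j
    return 0.5 * np.sum(e ** 2), -K.T @ e

# One illustrative descent step on the linear weights:
# err, grad = error_and_grad(X, y, centers, sigmas, lambdas)
# lambdas -= 0.01 * grad
```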