gradient-descent method to minimize the cost function, i.e.,
$$\{ z_i, \sigma_i, \lambda_i \}_{i=1}^{N} = \arg\min E \qquad (2.66)$$
The learning procedure starts with the initialization of the linear weights:
$$\lambda_i = \begin{cases} 1, & \text{if } z_i \text{ is constructed from a positive sample} \\ 0.5, & \text{if } z_i \text{ is constructed from a negative sample} \end{cases} \qquad (2.67)$$
and the RBF widths:
$$\sigma_i = \delta \min_{\substack{j \in \{1, 2, \dots, N\} \\ j \neq i}} \left\| z_i - z_j \right\|_M \qquad (2.68)$$
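For concreteness, the two initializations in Eqs. (2.67) and (2.68) can be sketched in Python/NumPy. This is a minimal sketch, not the book's code: it assumes the norm in Eq. (2.68) is Euclidean (i.e., the weighting matrix M is the identity), and the function name and the default value of the overlap factor delta are invented for the example.

```python
import numpy as np

def init_rbf(centers, labels, delta=0.5):
    """Initialize RBF weights (Eq. 2.67) and widths (Eq. 2.68).

    centers: (N, dim) array of RBF centers z_i (one per training sample).
    labels:  length-N boolean array, True if z_i comes from a positive sample.
    delta:   overlap factor (assumed default; the text leaves it a parameter).
    """
    centers = np.asarray(centers, dtype=float)
    labels = np.asarray(labels, dtype=bool)

    # Eq. (2.67): lambda_i = 1 for positive samples, 0.5 for negative ones.
    lam = np.where(labels, 1.0, 0.5)

    # Eq. (2.68): sigma_i = delta * min_{j != i} ||z_i - z_j||,
    # with the Euclidean norm standing in for the weighted norm ||.||_M.
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(dist, np.inf)  # exclude the j == i case
    sigma = delta * dist.min(axis=1)
    return lam, sigma
```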
Based on the gradient-descent method, the parameters of the i-th RBF unit are updated iteratively, as follows:
1. For $t = 1, 2, \dots, N_{\max}$:
2. Update the linear weights:
$$\lambda_i(t+1) = \lambda_i(t) - \eta_1 \frac{\partial E(t)}{\partial \lambda_i(t)} \qquad (2.69)$$
where
$$\frac{\partial E(t)}{\partial \lambda_i(t)} = \sum_{j=1}^{N} e_j(t) \, K(x_j, z_i)$$
3. Update the RBF centers:
$$z_i(t+1) = z_i(t) - \eta_2 \frac{\partial E(t)}{\partial z_i(t)} \qquad (2.70)$$
where
$$\frac{\partial E(t)}{\partial z_i(t)} = \lambda_i(t) \sum_{j=1}^{N} e_j(t) \, K(x_j, z_i) \, \frac{M \left( x_j - z_i(t) \right)}{\sigma_i^2(t)}$$
4. Update the RBF widths:
$$\sigma_i^2(t+1) = \sigma_i^2(t) - \eta_3 \frac{\partial E(t)}{\partial \sigma_i(t)} \qquad (2.71)$$
where
$$\frac{\partial E(t)}{\partial \sigma_i(t)} = \lambda_i(t) \sum_{j=1}^{N} e_j(t) \, K(x_j, z_i) \, \frac{\left( x_j - z_i(t) \right)^{T} M \left( x_j - z_i(t) \right)}{\sigma_i^3(t)}$$
5. Return the trained parameters $\{ z_i, \sigma_i, \lambda_i \}_{i=1}^{N}$.
where $N_{\max}$ is the maximum iteration count, and $\eta_1$, $\eta_2$, and $\eta_3$ are the step sizes.
The adjustment of the RBF models proceeds over many relevance-feedback sessions. The training samples are gathered from the first to the last retrieval session, and only selected samples are used to retrain the network. In each feedback session, newly retrieved samples that were not found in the previous retrievals are inserted into the existing RBF network. In the next iteration, the updating procedure is performed only on the newly inserted RBF units, which improves training speed.
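The iterative updates of Eqs. (2.69)-(2.71) can be sketched as follows. This is an illustrative implementation, not the authors' code: it assumes a Gaussian kernel $K(x, z_i) = \exp(-\|x - z_i\|^2 / (2\sigma_i^2))$ with an identity weighting matrix M, and the error convention $e_j(t) = f(x_j) - d_j$, under which the gradients take the unsigned forms shown above. Function names and step-size defaults are invented for the example, and $\sigma_i$ is updated directly rather than through $\sigma_i^2$.

```python
import numpy as np

def rbf_kernel(X, z, sigma):
    """K(x_j, z_i) = exp(-||x_j - z_i||^2 / (2 sigma_i^2)); identity M assumed."""
    return np.exp(-((X - z) ** 2).sum(axis=1) / (2.0 * sigma ** 2))

def train_rbf(X, d, Z, lam, sigma, n_max=100, eta1=0.01, eta2=0.01, eta3=0.01):
    """Gradient-descent updates of Eqs. (2.69)-(2.71).

    X:   (N, dim) training vectors x_j;  d: length-N target outputs d_j.
    Z:   (N_units, dim) centers z_i;    lam, sigma: weights and widths.
    The network output is f(x) = sum_i lam_i K(x, z_i), and the error is
    taken as e_j = f(x_j) - d_j (assumed sign convention).
    """
    X, d = np.asarray(X, float), np.asarray(d, float)
    Z = np.array(Z, float)
    lam, sigma = np.array(lam, float), np.array(sigma, float)
    for _ in range(n_max):
        K = np.stack([rbf_kernel(X, Z[i], sigma[i]) for i in range(len(Z))])
        e = lam @ K - d  # e_j(t), one entry per training sample
        for i in range(len(Z)):
            # Eq. (2.69): dE/dlambda_i = sum_j e_j K(x_j, z_i)
            g_lam = e @ K[i]
            # Eq. (2.70): dE/dz_i = lam_i sum_j e_j K (x_j - z_i) / sigma_i^2
            g_z = lam[i] * ((e * K[i]) @ (X - Z[i])) / sigma[i] ** 2
            # Eq. (2.71): dE/dsigma_i = lam_i sum_j e_j K ||x_j - z_i||^2 / sigma_i^3
            g_s = lam[i] * ((e * K[i]) @ ((X - Z[i]) ** 2).sum(axis=1)) / sigma[i] ** 3
            lam[i] -= eta1 * g_lam
            Z[i] -= eta2 * g_z
            sigma[i] -= eta3 * g_s
    return lam, Z, sigma
```

With small step sizes, each sweep moves every unit's weight, center, and width against its gradient, so the sum of squared errors over the training samples decreases.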
 