indicates that 80% performs well on NUS-WIDE-USER50. By preserving 80% of the energy of the singular values, r_U = 25, r_I = 105, and r_T = 18. The regularization terms α and β control how much the tensor decomposition incorporates the information of the affinity intra-relations. We keep r_U = 25, r_I = 105, and r_T = 18. Figure 2.4b shows the impacts of α and β on the average F-score.
α = 0.01 and β = 0.001 achieve the best result. From the results, we can see that the performance is more sensitive to the regularization weights than to the rank numbers. The poor performances when α = 0 or β = 0 confirm the intuition that affinity constraints alone or ℓ1-norm constraints alone cannot generate good latent factors. For the remaining experiments, we select r_U = 25, r_I = 105, r_T = 18, α = 0.01, and β = 0.001.
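The chapter does not spell out how the 80% energy criterion is applied, but a common reading is to unfold the user-image-tag tensor along each mode and keep the smallest rank whose leading singular values retain 80% of the squared spectral energy. Below is a minimal sketch under that reading; the unfold helper and the toy tensor are illustrative stand-ins, not the chapter's code.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def rank_by_energy(matrix, energy=0.80):
    """Smallest rank whose top singular values preserve `energy` of the
    total spectral energy (sum of squared singular values)."""
    s = np.linalg.svd(matrix, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy)) + 1

# Toy user x image x tag tensor standing in for the observed tagging data.
T = np.random.rand(50, 200, 120)
r_U = rank_by_energy(unfold(T, 0))  # user mode
r_I = rank_by_energy(unfold(T, 1))  # image mode
r_T = rank_by_energy(unfold(T, 2))  # tag mode
print(r_U, r_I, r_T)
```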
2.4.3 Performance Comparison
To compare the performances, five algorithms as well as the original tags are
employed as the baselines:
• Original tagging (OT): the original user-generated tags.
• Random walk with restart (RWR): the tag refinement algorithm based on random walk [39].
• Tag refinement based on visual and semantic consistency (TRVSC) [20].
• Multiedge graph (M-E Graph): a unified multiedge graph framework for tag processing proposed in [23].
• Low-rank approximation (LR): tag refinement based on low-rank approximation with content-tag prior and error sparsity [47].
• Multiple-correlation probabilistic matrix factorization (MPMF): the tag refinement algorithm that simultaneously models image-tag, tag-tag, and image-image correlations in a factor analysis framework [19].
In addition, we compare the performance of the proposed approach under four different settings: (1) TF without smoothness constraints, optimized under the 0/1 scheme (TF_0/1); (2) TF with multi-correlation smoothness constraints, optimized under the 0/1 scheme (MTF_0/1); (3) TF without smoothness constraints, optimized under the ranking scheme with the negative set constructed as in Eq. (2.14) (TF_rank); and (4) TF with multi-correlation smoothness constraints, optimized under the ranking scheme with the negative set constructed as in Eq. (2.10) (RMTF).
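Equations (2.10) and (2.14) appear earlier in the chapter, so only the general flavor of the two schemes is sketched here: the 0/1 scheme fits the reconstructed tensor entries directly to the binary tag observations, while the ranking scheme only requires that an observed (user, image, tag) triple score higher than a sampled negative tag. The sketch below uses a generic CP-style score and a logistic ranking surrogate as stand-ins; it is not the chapter's objective or its negative-set construction.

```python
import numpy as np

def score(u, i, t):
    """Generic CP-style score for one (user, image, tag) triple,
    where u, i, t are same-length latent factor rows."""
    return float(np.sum(u * i * t))

def loss_01(u, i, t, y):
    """0/1 scheme: squared error against the binary observation y."""
    return (y - score(u, i, t)) ** 2

def loss_rank(u, i, t_pos, t_neg):
    """Ranking scheme: logistic loss on the margin between an observed
    tag and a sampled negative tag (a BPR-style surrogate)."""
    margin = score(u, i, t_pos) - score(u, i, t_neg)
    return float(np.log1p(np.exp(-margin)))
```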
Table 2.3 lists the average performances for different tag refinement algorithms.
It is shown that RWR fails on the noisy web data. One possible reason is that the
Table 2.3 Average performances of different algorithms for tag refinement

           OT     RWR    TRVSC  M-E Graph  LR     MPMF   TF_0/1  MTF_0/1  TF_rank  RMTF
  F-score  0.477  0.475  0.490  0.530      0.523  0.521  0.515   0.542    0.531    0.571