Table 4  Results of feature integration among researchers

Feature set             Feature            PaperNum
                                           Train    Test
Network                 G_Ecooc            0.470    0.413
                        G_Eoverlap         0.508    0.411
                        G_Jcooc            0.443    0.261
                        G_Joverlap         0.585    0.325
                        G_affiliation      0.178   -0.011
                        G_project          0.540    0.043
                        G_ALL              0.821    0.417
Attributes              ALL                0.491    0.448
Network + Attributes    G_Ecooc+A          0.514    0.429
                        G_Eoverlap+A       0.544    0.404
                        G_Jcooc+A          0.481    0.284
                        G_Joverlap+A       0.519    0.420
                        G_affiliation+A    0.497    0.159
                        G_project+A        0.548    0.304
                        G_ALL+A            0.811    0.456
a good correlation with target ranking. One might infer that researchers who are fa-
mous on Japanese web sites and who frequently co-occur with other researchers on
English-language web sites are the more creative researchers.
In the combination model, we also use Boolean-type operators (w_i ∈ {1, 0}) to combine the relations. Using the six types of relations to combine a network G_{affiliation-Ecooc-Eoverlap-Jcooc-Joverlap-project}, we can create 2^6 - 1 (= 63) types
of combination-relational networks (in which at least one type of relation exists).
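As a minimal sketch of this enumeration (the relation names follow the text; representing each relation as a 0/1 adjacency matrix and merging selected relations with logical OR are assumptions for illustration):

```python
from itertools import product

# Six relation types, in the subscript order of the combined network
# G_{affiliation-Ecooc-Eoverlap-Jcooc-Joverlap-project}.
RELATIONS = ["affiliation", "Ecooc", "Eoverlap", "Jcooc", "Joverlap", "project"]

def combined_networks(adjacency):
    """Yield (label, matrix) for every combined-relational network.

    `adjacency` maps each relation name to an n x n 0/1 matrix (list of
    lists).  A relation is included when its Boolean weight w_i = 1, and
    edges of the selected relations are merged with logical OR.  Skipping
    the all-zero weight vector leaves 2**6 - 1 = 63 combinations.
    """
    n = len(next(iter(adjacency.values())))
    for weights in product([0, 1], repeat=len(RELATIONS)):
        if not any(weights):
            continue  # at least one type of relation must exist
        label = "G_" + "-".join(str(w) for w in weights)
        merged = [[0] * n for _ in range(n)]
        for w, name in zip(weights, RELATIONS):
            if w:
                for i in range(n):
                    for j in range(n):
                        merged[i][j] |= adjacency[name][i][j]
        yield label, merged
```

For example, the label G_0-1-0-0-0-1 corresponds to merging the English-language cooc relation with the co-project relation.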
We obtain network rankings in this combined network to learn and predict the tar-
get rankings. The top 50 correlations between network rankings in a combined-
relational network and target rankings are portrayed in Fig. 3. Results show that
degree centralities on combined-relational networks produce good correlation with target rankings. For instance, combining cooc relations on English-language web sites with co-project relations (G_{0-1-0-0-0-1}), or combining a cooc relation and overlap relations on English-language web sites with a cooc relation on Japanese web sites (G_{0-1-1-1-0-0}), makes the networks more reasonable for use in predicting a target ranking.
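The comparison of a network ranking against a target ranking can be sketched as follows. Degree centrality is named in the text; using Spearman's rank correlation as the correlation measure, and assuming rankings without ties, are assumptions for illustration:

```python
def degree_ranking(adj):
    """Rank nodes by degree centrality (highest degree first) on a 0/1
    adjacency matrix given as a list of lists."""
    degrees = [sum(row) for row in adj]
    return sorted(range(len(adj)), key=lambda i: -degrees[i])

def spearman(rank_a, rank_b):
    """Spearman rank correlation between two orderings of the same items,
    each given as an ordered list (best item first, no ties)."""
    n = len(rank_a)
    pos_a = {item: r for r, item in enumerate(rank_a)}
    pos_b = {item: r for r, item in enumerate(rank_b)}
    d2 = sum((pos_a[x] - pos_b[x]) ** 2 for x in pos_a)
    return 1 - 6 * d2 / (n * (n**2 - 1))
```

Here `rank_a` would be the degree-centrality ranking on a combined-relational network and `rank_b` the target ranking (e.g., by PaperNum).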
We apply our feature integration ranking model (with several variations) to single and multiple relational social networks to train and predict rankings of researchers' Paper. We use Ranking SVM to learn the ranking model that minimizes the pairwise training error on the training data. Then we apply the model
to predict rankings on training data (again) and on testing data. Table 4 presents
comparable results for models of several types. First, we integrate attribute indices
(i.e., hit number of names on the Japanese web sites and on English-language web
sites) of researchers as features, thereby producing a baseline of this model to learn