based on the predicted rating from V. This process conforms to real scenarios, as in our previous example, where Linda's office mate influences Linda, who in turn influences Angela. Following this intuition, we decide to apply an iterative classification method [22-24] for distant friend inference.
Iterative classification is an approximation technique for classifying relational entities. The method is based on the fact that relational entities are correlated with each other: estimating the classification of an entity often depends on the classification estimates of its neighbors, and an improved classification of one entity in turn helps to infer its neighbors, and vice versa. Unlike traditional data mining, which assumes that data instances are independent and identically distributed (i.i.d.) samples and classifies them one by one, iterative classification classifies all the entities in the testing set simultaneously and iteratively, because the classifications of those entities are correlated. Note that iterative classification is an approximation technique, because exact inference is computationally intractable unless the network structure has certain graph topologies, such as sequences, trees, or networks with low treewidth. In previous research, iterative classification has been used successfully to classify company profiles [23], hypertext documents [22], and emails [25].
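To make the general procedure concrete, the following Python sketch (not taken from the cited works; the classifier callables are illustrative placeholders) shows the core loop of iterative classification: entities are first labeled from their own attributes only, then repeatedly relabeled in random order using the current labels of their neighbors until no label changes.

```python
import random

def iterative_classification(entities, neighbors, local_classify,
                             relational_classify, max_iter=10):
    """Generic iterative classification loop (illustrative sketch).

    entities              -- list of test entities to label
    neighbors[e]          -- entities linked to e
    local_classify(e)     -- initial label from e's own attributes only
    relational_classify(e, labels) -- label from e's attributes plus the
                             current labels of e's neighbors
    """
    # Bootstrap: label every entity from its own attributes.
    labels = {e: local_classify(e) for e in entities}

    for _ in range(max_iter):
        changed = False
        order = list(entities)
        random.shuffle(order)          # visit entities in a random order
        for e in order:
            new_label = relational_classify(e, labels)
            if new_label != labels[e]:
                labels[e] = new_label  # an improved label helps e's neighbors
                changed = True
        if not changed:                # stop once labels stabilize
            break
    return labels
```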
The pseudo-code for distant friend inference is shown in Table 4.1. It predicts the users' ratings for one target item at a time. The original iterative classification method classifies the whole network of users; however, since the number of users in a social network is usually large, we reduce the computation cost by limiting the inference to a user set N, which includes the target users of the target item I and their corresponding immediate friends. In each iteration, we generate a random ordering O of the users in N. For each user U in O, if U has no immediate friend who belongs to U(I), the set of users whose rating for I (either observed or already predicted) is known, then U is skipped; otherwise, immediate friend inference is applied to predict R_UI, and U(I) is inserted into or updated with R_UI if the prediction has changed.
Table 4.1 Pseudo-code for distant friend inference
1. For each item I in the testing set do
2.   Select a set of users N for inference. N includes the target users of item I and their corresponding immediate friends.
3.   For iteration from 1 to M do
4.     Generate a random ordering, O, of users in N
5.     For each user U in O do
6.       If U has no immediate friend who exists in U(I)
7.         Continue
8.       Else
9.         Apply immediate friend inference:
10.          R_UI = Σ_k k · Pr(R_UI = k | A = a_U, A′ = a′_I, {R_VI = r_VI : V ∈ U(I) ∩ N(U)})
11.        Insert into or update U(I) with R_UI if different
12.      End If
13.    End For
14.    If no updates in the current iteration
15.      Break
16.    End If
17.  End For
18.  Output the final predictions for the target users
19. End For
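To show how Table 4.1 might look in code, here is a minimal Python sketch of the same loop. The helper callables target_users, friends, observed_ratings, and immediate_friend_inference are hypothetical placeholders for the corresponding quantities in the pseudo-code (the target users of an item, a user's immediate friends, the observed ratings that initialize U(I), and the expected rating Σ_k k · Pr(R_UI = k | ...) from step 10); none of these names come from the original text.

```python
import random

def distant_friend_inference(items, target_users, friends, observed_ratings,
                             immediate_friend_inference, max_iter=10):
    """Sketch of the distant friend inference loop in Table 4.1.

    items                        -- target items in the testing set
    target_users(I)              -- users whose rating for item I must be predicted
    friends(U)                   -- immediate friends of user U
    observed_ratings(I)          -- {user: rating} already observed for item I
    immediate_friend_inference(U, I, friend_ratings)
                                 -- expected rating sum_k k * Pr(R_UI = k | ...)
                                    given the known ratings of U's immediate friends
    """
    predictions = {}
    for I in items:
        # Limit inference to the target users of I and their immediate friends (set N).
        N = set(target_users(I))
        for U in list(N):
            N |= set(friends(U))

        # U(I): users whose rating for I is known, either observed or predicted so far.
        known = dict(observed_ratings(I))

        for _ in range(max_iter):              # iterations 1..M
            updated = False
            order = list(N)
            random.shuffle(order)              # random ordering O of users in N
            for U in order:
                rated_friends = {V: known[V] for V in friends(U) if V in known}
                if not rated_friends:          # no immediate friend in U(I): skip U
                    continue
                r_UI = immediate_friend_inference(U, I, rated_friends)
                if known.get(U) != r_UI:       # insert into or update U(I) if different
                    known[U] = r_UI
                    updated = True
            if not updated:                    # no updates in this iteration: stop early
                break

        # Output the final predictions for the target users of I.
        predictions[I] = {U: known[U] for U in target_users(I) if U in known}
    return predictions
```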
 