$$
p(s_i = 1 \mid s^w_{-i}, u_i, z_i, \cdot) \propto \frac{N_{U,S}(u_i, 1) + \alpha_\lambda - 1}{N_U(u_i) + 2\alpha_\lambda - 1} \cdot \frac{N_{U,S,Z}(u_i, 1, z_i) + \alpha_\theta - 1}{N_{U,S}(u_i, 1) + T\alpha_\theta - 1} \tag{4.2}
$$
$$
p(c_i \mid c^w_{-i}, s_i = 0, u_i, \cdot) \propto \frac{N_{U,C,S,Z}(u_i, c_i, 0, z_i) + \alpha_\gamma}{N_{U,S,Z}(u_i, 0, z_i) + |C_{u_i}|\alpha_\gamma} \cdot \frac{N_{U,Z}(c_i, z_i) + \alpha_\theta - 1}{N_U(c_i) + T\alpha_\theta - 1} \tag{4.3}
$$
$$
p(z_i \mid z^w_{-i}, s_i = 0, w_i, \cdot) \propto \frac{N_{U,Z}(c_i, z_i) + \alpha_\theta - 1}{N_U(c_i) + T\alpha_\theta - 1} \cdot \frac{N_{Z,W}(z_i, w_i) + \alpha_{\phi^w}}{N_Z(z_i) + |W|\alpha_{\phi^w}} \tag{4.4}
$$
$$
p(z_i \mid z^w_{-i}, s_i = 1, w_i, \cdot) \propto \frac{N_{U,S,Z}(u_i, 1, z_i) + \alpha_\theta - 1}{N_{U,S}(u_i, 1) + T\alpha_\theta - 1} \cdot \frac{N_{Z,W}(z_i, w_i) + \alpha_{\phi^w}}{N_Z(z_i) + |W|\alpha_{\phi^w}} \tag{4.5}
$$
where $u_i$ denotes the user to which the $i$th word belongs, $z_i$ denotes the topic assignment of the $i$th word, and $\alpha_\theta$, $\alpha_\lambda$, $\alpha_\gamma$, $\alpha_{\phi^w}$, $\alpha_{\phi^v}$ are symmetric hyperparameters controlling the corresponding Dirichlet prior distributions. $N_{(\cdot)}(\cdot)$ stores the number of samples satisfying certain requirements during the iterative sampling process.
For example, $N_{U,C,S,Z}(u_i, c_i, 0, z_i)$ represents the number of tag words for user $u_i$ which are supposed to be influenced by contact user $c_i$ and generated from topic $z_i$. The update rules for the variables concerning visual descriptors are similar and omitted here.
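The remove/score/re-add pattern behind Eqs. (4.2)–(4.5) can be illustrated with a minimal NumPy sketch. This is a simplified toy, not the authors' implementation: the corpus, dimensions, and counter names (`N_USZ`, `N_ZW`, etc.) are hypothetical, and only the own-interest branch (Eq. 4.5, $s_i = 1$) is resampled; the switch and contact updates follow the same pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): T topics, |W| tag words, U users
T, W, U = 3, 5, 2
alpha_theta, alpha_phi_w = 0.1, 0.01

# Counters named after the N_{...} tables in Eqs. (4.2)-(4.5)
N_USZ = np.zeros((U, 2, T))   # N_{U,S,Z}(u, s, z)
N_US  = np.zeros((U, 2))      # N_{U,S}(u, s)
N_U   = np.zeros(U)           # N_U(u)
N_ZW  = np.zeros((T, W))      # N_{Z,W}(z, w)
N_Z   = np.zeros(T)           # N_Z(z)

# Toy corpus: (user, word) pairs with current assignments s_i, z_i
corpus = [(0, 1), (0, 3), (1, 2), (1, 4)]
s = [1, 1, 1, 1]
z = [int(rng.integers(T)) for _ in corpus]
for (u, w), si, zi in zip(corpus, s, z):   # fill the counters
    N_USZ[u, si, zi] += 1; N_US[u, si] += 1; N_U[u] += 1
    N_ZW[zi, w] += 1; N_Z[zi] += 1

def gibbs_sweep():
    for i, (u, w) in enumerate(corpus):
        si, zi = s[i], z[i]
        # Remove word i from all counters; this is what the "-1"
        # terms in the equations account for.
        N_USZ[u, si, zi] -= 1; N_US[u, si] -= 1; N_U[u] -= 1
        N_ZW[zi, w] -= 1; N_Z[zi] -= 1
        # Resample z_i for the own-interest branch, Eq. (4.5)
        # (counts already exclude i, so no explicit -1 here):
        p = ((N_USZ[u, 1, :] + alpha_theta) / (N_US[u, 1] + T * alpha_theta)
             * (N_ZW[:, w] + alpha_phi_w) / (N_Z + W * alpha_phi_w))
        zi = int(rng.choice(T, p=p / p.sum()))
        # (Resampling s_i via Eq. (4.2) and the contact branch via
        #  Eqs. (4.3)-(4.4) would use the same remove/score/add pattern.)
        s[i], z[i] = si, zi
        N_USZ[u, si, zi] += 1; N_US[u, si] += 1; N_U[u] += 1
        N_ZW[zi, w] += 1; N_Z[zi] += 1

gibbs_sweep()
```

After each sweep the counter totals are preserved, since every word is removed and then re-added exactly once.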
4.3.4 Parameter Estimation
The above sampling process repeats until the Gibbs sampler converges, and we obtain outputs by counting the sampled variables $s^w_i, s^v_i, c^w_i, c^v_i, z^w_i, z^v_i$. Topic-word and topic-visual descriptor distributions $\phi^w, \phi^v$, which represent the learned topic space, can be easily computed from the sampled topic assignments $z^w_i, z^v_i$. Since $\phi^w_{t,j}$ actually measures the probability of the $j$th tag word in the $t$th topic, it can be estimated by normalizing the counter $N_{Z,W}(\cdot)$. It is similar for $\phi^v$. Therefore, we have:
$$
\phi^w_{t,j} = \frac{N_{Z,W}(Z_t, w_j) + \alpha_{\phi^w}}{N_Z(Z_t) + |W|\alpha_{\phi^w}}, \qquad \phi^v_{t,j} = \frac{N_{Z,V}(Z_t, v_j) + \alpha_{\phi^v}}{N_Z(Z_t) + |V|\alpha_{\phi^v}} \tag{4.6}
$$
where $Z_t$ denotes the $t$th topic, which is different from the topic assignment variables $z^w_i$ and $z^v_i$. The node topic distribution for the $m$th user $U_m$ can be computed by:
$$
\theta_{m,t} = \frac{N_{U,S,Z}(U_m, 1, Z_t) + \alpha_\theta}{N_{U,S}(U_m, 1) + T\alpha_\theta} \tag{4.7}
$$
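Estimating the distributions in Eqs. (4.6) and (4.7) reduces to normalizing the final counters with Dirichlet smoothing. A minimal sketch under assumed toy sizes and hypothetical counter values (the array names mirror the $N_{(\cdot)}$ tables above):

```python
import numpy as np

T, W, U = 2, 3, 2                     # toy sizes: topics, tag vocabulary, users
alpha_theta, alpha_phi_w = 0.1, 0.01  # symmetric Dirichlet hyperparameters

# Hypothetical final counters from a converged sampler
N_ZW = np.array([[4., 1., 0.],        # N_{Z,W}(Z_t, w_j)
                 [0., 2., 3.]])
N_Z = N_ZW.sum(axis=1)                # N_Z(Z_t)
N_USZ1 = np.array([[3., 1.],          # N_{U,S,Z}(U_m, 1, Z_t)
                   [0., 4.]])
N_US1 = N_USZ1.sum(axis=1)            # N_{U,S}(U_m, 1)

# Eq. (4.6): topic-word distributions by normalizing N_{Z,W}
phi_w = (N_ZW + alpha_phi_w) / (N_Z[:, None] + W * alpha_phi_w)

# Eq. (4.7): node (user) topic distributions from own-interest words (s = 1)
theta = (N_USZ1 + alpha_theta) / (N_US1[:, None] + T * alpha_theta)
```

Each row of `phi_w` and `theta` sums to one, as required of the estimated distributions; the visual-descriptor distribution $\phi^v$ would be computed identically from $N_{Z,V}$.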