The Recommendation Model. Based on the learned model P, we define the learning-oriented recommender (LoR). For each user with history log s over Q, we want to recommend a set R(s) ⊆ Q of N questions that satisfies the following:

R(s) = arg max^N_{q ∈ Q, q ∉ s} P(q | s)    (9)
where arg max^N returns the N maximal arguments with respect to the given function. In other words, the learning-oriented recommender recommends the N best questions that maximize the user's utility; in this case, the utility is given by the learned model P.
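As a minimal sketch of Eq. (9), the top-N selection can be expressed as follows, where `P`, `recommend`, and the question/history representations are hypothetical stand-ins for the learned model and the data structures of the actual system:

```python
import heapq

def recommend(P, history, questions, N):
    """Learning-oriented recommender (Eq. 9): return the N questions
    not yet in the user's history log that maximize P(q | history)."""
    seen = set(history)
    candidates = (q for q in questions if q not in seen)
    # arg max^N: the N candidates with the highest model probability
    return heapq.nlargest(N, candidates, key=lambda q: P(q, history))
```

`heapq.nlargest` returns the candidates sorted by decreasing score, which matches the "first N maximal arguments" reading of arg max^N.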
P is learned according to a probabilistic graphical model based on hidden VLMCs: one with hidden states T and then, for each τ ∈ T, a VLMC with hidden states L. The observation states are given by the question space Q (see Figure 4).
Fig. 4. The learning-oriented recommender
To learn such a model, first, the training sequences are projected onto the topic space using p_τ, and a VLMC over T is trained on them. As a result, the transition model P(T(t+1) | T(1:t)) is obtained.
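The VLMC training step can be sketched with a simple context-counting model. This is an assumption-laden simplification: the helpers `train_vlmc` and `transition_prob` are hypothetical, and a real VLMC learner would also prune the context tree (e.g. by a divergence criterion), which is omitted here for brevity.

```python
from collections import defaultdict

def train_vlmc(sequences, max_depth=3):
    """Count (context, next-symbol) pairs for every context length up
    to max_depth. A full VLMC implementation would additionally prune
    uninformative contexts from this tree."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for t, sym in enumerate(seq):
            for d in range(0, max_depth + 1):
                if d <= t:
                    ctx = tuple(seq[t - d:t])
                    counts[ctx][sym] += 1
    return counts

def transition_prob(counts, history, sym, max_depth=3):
    """Estimate P(T(t+1) = sym | T(1:t)) from the counts, backing off
    to shorter contexts until one seen in training is found."""
    for d in range(min(max_depth, len(history)), -1, -1):
        ctx = tuple(history[len(history) - d:])
        if ctx in counts:
            total = sum(counts[ctx].values())
            return counts[ctx][sym] / total
    return 0.0
```

The same sketch applies to a VLMC over topics T or, below, over learning objectives L; only the symbol alphabet changes.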
Then, for each topic τ, a transition probability P_τ(L(t+1) | L(1:t)) is learned by training a VLMC over L on the projections of the question sub-sequences within topic τ, using the learning-objective projection function p_ℓ.
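The per-topic step above can be sketched as follows. Everything here is a hypothetical rendering: `p_topic` and `p_objective` stand in for the projection functions p_τ and p_ℓ, `train_vlmc` is any VLMC trainer passed in as a parameter, and the splitting of a log into maximal same-topic runs is one plausible reading of "question sub-sequences within topic τ".

```python
from collections import defaultdict
from itertools import groupby

def train_topic_models(sequences, p_topic, p_objective, train_vlmc):
    """For each topic tau, gather the learning-objective projections of
    the maximal same-topic runs of questions, then train one VLMC over
    L per topic, yielding P_tau(L(t+1) | L(1:t)) for every tau."""
    per_topic = defaultdict(list)
    for seq in sequences:
        # split each log into runs of consecutive questions sharing a topic
        for tau, run in groupby(seq, key=p_topic):
            per_topic[tau].append([p_objective(q) for q in run])
    return {tau: train_vlmc(subseqs) for tau, subseqs in per_topic.items()}
```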
We define the observation model P(Q(t+1) | T(t+1), L(t+1), Q(1:t)) as the probability of randomly sampling an unvisited question corresponding to topic T(t+1) = τ_{t+1} and learning objective L(t+1) = ℓ_{t+1}:

P(q_{t+1} | τ_{t+1}, ℓ_{t+1}, q_{1:t}) = 0 if (q_{t+1}, τ_{t+1}) ∉ M_τ or (q_{t+1}, ℓ_{t+1}) ∉ M_ℓ, and 1/|S| otherwise    (10)

where S = { q ∈ Q \ {q_1, ..., q_t} | (q, τ_{t+1}) ∈ M_τ ∧ (q, ℓ_{t+1}) ∈ M_ℓ }.
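A sketch of the observation model of Eq. (10), under the assumption that the mappings M_τ and M_ℓ are given as sets of (question, topic) and (question, objective) pairs; the function names are hypothetical. Questions already in the history also receive probability zero, since the mass is spread uniformly over the unvisited set S.

```python
import random

def observation_prob(q_next, tau, ell, history, questions, M_tau, M_ell):
    """Eq. (10): uniform probability 1/|S| over the unvisited questions
    matching topic tau and learning objective ell, zero elsewhere."""
    if ((q_next, tau) not in M_tau or (q_next, ell) not in M_ell
            or q_next in history):
        return 0.0
    S = [q for q in questions
         if q not in history and (q, tau) in M_tau and (q, ell) in M_ell]
    return 1.0 / len(S)

def sample_question(tau, ell, history, questions, M_tau, M_ell):
    """Randomly sample an unvisited question consistent with (tau, ell)."""
    S = [q for q in questions
         if q not in history and (q, tau) in M_tau and (q, ell) in M_ell]
    return random.choice(S) if S else None
```

Note that whenever `observation_prob` reaches the division, `q_next` itself belongs to S, so S is never empty there.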