Intuition: Question sequences are shaped first by the underlying topics and the order in which these topics are tackled, and then, within each topic, by a particular order of learning objectives. In other words, users tend to ask questions grouped by topic, in an order determined by the questions' learning objectives (see Figure 3).
Fig. 3. Intuition behind the user learning process
This intuition emerged during the evaluation process, in which several probabilistic recommendation models were constructed and tested. The results showed that the model based on this intuition outperformed the others. Due to limited space, only three of the most relevant models are considered here for comparison.
Preliminaries. Let $Q$, $T$ and $L$ be random variables taking values in the question set $\mathcal{Q}$, the topic space $\mathcal{T}$ and the set of learning objectives $\mathcal{L}$, respectively.
Consider $\mathcal{H}$ to be the history database, which contains, for each user, an ordered sequence of questions representing the user's history log.

A learner is given a training set (usually a subset of the history database $\mathcal{H}$) of question sequences $q_1^n = q_1 q_2 \dots q_n$, where $q_i \in \mathcal{Q}$ and $q_i \prec q_{i+1}$ means that question $q_i$ was asked before question $q_{i+1}$.
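As a concrete (hypothetical) picture of these objects, the history database can be represented as a mapping from users to their ordered question logs; all user and question identifiers below are illustrative, not from the paper:

```python
# Minimal sketch of the history database H, assuming questions are
# identified by string ids. All user and question ids are hypothetical.
from typing import Dict, List

QuestionSeq = List[str]  # an ordered log q_1 q_2 ... q_n, as asked

history_db: Dict[str, QuestionSeq] = {
    "user_1": ["q3", "q7", "q7", "q12"],
    "user_2": ["q1", "q3", "q12"],
}

# A training set is typically a subset of the sequences stored in H.
training_set: List[QuestionSeq] = list(history_db.values())
```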
Given this training set, our goal is to learn a model $P$ that provides a probability assignment for any future outcome given some past. More specifically, given a context $s \in \mathcal{Q}^*$ of previously asked questions (i.e., an ordered sequence of the user's past question selections) and a question $q$, the learner should generate a conditional probability distribution $P(q \mid s)$.
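To make the interface of such a learner concrete, the sketch below shows a deliberately simple baseline, not the topic/learning-objective model proposed here: a first-order Markov (bigram) predictor with add-one smoothing that conditions only on the last question in the context $s$. The class and method names are assumptions for illustration:

```python
# Baseline learner sketch: P(q | s) estimated from bigram counts with
# add-one (Laplace) smoothing. This is an illustrative stand-in, not
# the model evaluated in the paper.
from collections import Counter, defaultdict
from typing import Dict, List

class BigramQuestionModel:
    def __init__(self, question_set: List[str]) -> None:
        self.questions = question_set
        self.counts: Dict[str, Counter] = defaultdict(Counter)

    def fit(self, sequences: List[List[str]]) -> None:
        # Count how often each question follows another in the training logs.
        for seq in sequences:
            for q_prev, q_next in zip(seq, seq[1:]):
                self.counts[q_prev][q_next] += 1

    def prob(self, q: str, s: List[str]) -> float:
        # Only the last question of the context s is used (first-order
        # Markov assumption); smoothing keeps unseen transitions nonzero.
        if not s:
            return 1.0 / len(self.questions)
        prev = s[-1]
        total = sum(self.counts[prev].values())
        return (self.counts[prev][q] + 1) / (total + len(self.questions))

model = BigramQuestionModel(["q1", "q3", "q7", "q12"])
model.fit([["q3", "q7", "q7", "q12"], ["q1", "q3", "q12"]])
p = model.prob("q12", ["q3", "q7"])  # P(q12 | ... q7)
```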
We measure the prediction performance using the average log-loss [6] $l(P, x_1^t)$ of $P$ with respect to a test sequence $x_1^t = x_1 x_2 \dots x_t$:

$$l(P, x_1^t) = -\frac{1}{t} \sum_{i=1}^{t} \log P(x_i \mid x_1 \dots x_{i-1}) \qquad (8)$$
where the $x_i$ are questions in $\mathcal{Q}$. The average log-loss is directly related to the likelihood $P(x_1^t) = \prod_{i=1}^{t} P(x_i \mid x_1 \dots x_{i-1})$ and, therefore, minimizing the average log-loss is equivalent to maximizing the likelihood.
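Eq. (8) translates directly into code. The sketch below assumes a model exposed as a callable `prob(q, s)`, like the baseline above, and uses base-2 logarithms (the base of `log` in Eq. (8) is not specified in the text), so the loss is measured in bits:

```python
import math
from typing import Callable, List

def average_log_loss(prob: Callable[[str, List[str]], float],
                     test_seq: List[str]) -> float:
    # Eq. (8): l(P, x_1^t) = -(1/t) * sum_i log P(x_i | x_1 ... x_{i-1}).
    t = len(test_seq)
    total = 0.0
    for i, x_i in enumerate(test_seq):
        total += math.log2(prob(x_i, test_seq[:i]))
    return -total / t

# Sanity check: a uniform model over 4 questions assigns probability
# 1/4 to every prediction, so the average log-loss is exactly 2 bits.
uniform = lambda q, s: 0.25
assert abs(average_log_loss(uniform, ["q1", "q3", "q7"]) - 2.0) < 1e-9
```

Since $l(P, x_1^t) = -\frac{1}{t}\log P(x_1^t)$, driving this value down is the same as driving the likelihood up, which is exactly the equivalence stated above.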