dependencies with our topic taxonomy, the WordNet [18] lexical database was used instead.
Learning Utility. The learning utility refers to a user's learning gain from a recommendation. One way of measuring learning utility is with user ratings. Since such an experiment can only be performed within a user study, a comparative metric is introduced instead that shows how well a model reflects the user's learning process.
Consider two sets of equal size: $S_{learn}$, a set of user question sequences based on the users' learning process (like the ones collected during our experiment), and $S_{rand}$, a set of randomly generated question sequences. Each sequence pair from $S_{learn} \times S_{rand}$ that corresponds to the same user has the same length.
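The construction of the matched random set can be sketched as follows; this is a minimal illustration, where the helper name and the question identifiers are our own, not taken from the paper:

```python
import random

def make_random_counterpart(s_learn, question_pool, seed=0):
    """For each user sequence in s_learn, draw a random question
    sequence of the same length from the full question pool.
    (Hypothetical helper, for illustration only.)"""
    rng = random.Random(seed)
    return [rng.sample(question_pool, len(seq)) for seq in s_learn]

# Example: three users over a pool of ten questions q0..q9.
pool = [f"q{i}" for i in range(10)]
s_learn = [["q1", "q4", "q2"], ["q0", "q3"], ["q5", "q6", "q7", "q8"]]
s_rand = make_random_counterpart(s_learn, pool)
```

Pairing by user and matching lengths ensures that any accuracy gap between the two sets is attributable to sequence content, not to sequence length or user coverage.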
Now let M be a recommendation model. We train this model with each of the
two sequence sets using cross-validation and obtain the accuracy values:
$$a_{learn} = \mathrm{acc}(M, S_{learn}) \quad\text{and}\quad a_{rand} = \mathrm{acc}(M, S_{rand}). \qquad (14)$$
We define the learning utility of model $M$ as the normalized accuracy difference:
$$lu(M, S_{learn}, S_{rand}) = \begin{cases} 0 & \text{if } a_{learn} = 0 \\[4pt] \dfrac{a_{learn} - a_{rand}}{a_{learn}} & \text{otherwise.} \end{cases} \qquad (15)$$
This measure works only under the assumption that the set $S_{learn}$ truly reflects the users' learning process. It shows how strongly model $M$ depends on receiving genuine learning sequences as input.
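Equation (15) is straightforward to evaluate once the two cross-validated accuracies are available. A minimal sketch (the function name is ours):

```python
def learning_utility(a_learn, a_rand):
    """Normalized accuracy difference from Eq. (15):
    0 when a_learn is 0, otherwise (a_learn - a_rand) / a_learn."""
    if a_learn == 0:
        return 0.0
    return (a_learn - a_rand) / a_learn

# A model that is equally accurate on random sequences gains nothing
# from the learning structure and scores 0; a model whose accuracy
# comes entirely from that structure approaches 1.
print(learning_utility(0.5, 0.25))  # → 0.5
```

Note that the measure can be negative when the model fits random sequences better than real ones, which also signals that it is not exploiting the learning process.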
3.4 Results
In the first part, the survey results were analyzed in order to get an overview of the generated sequences and to identify early patterns and correlations between user answers. The results show that in some cases the users strongly agree on a particular question sequence, while in other cases major discrepancies were identified (see Figure 6). This can be explained by the unique and personal way humans understand certain concepts, i.e., the unique conceptual world map existing in each human mind. Additionally, some domain-specific questions are rather ambiguous and open to interpretation. The survey also captures user preferences and personal opinions, and therefore there are no unanimous answers. For our evaluation purposes, this aspect was preferred over highly correlated question sequences because it reflects real-life situations. Hence, the learned models are not highly accurate, but despite the conflicting user opinions, some of them still proved able to identify learning-process patterns and use them to make useful recommendations.
In order to show the benefits of the learning-oriented recommender, two
other models were considered: a simple recommender (SR) using a VLMC of
random variable Q over the question space and a random recommender (RR)