Li et al. [38] showed that this methodology yields unbiased results as long as we disregard recommendations which have not been shown to users. We use offline evaluation to fine-tune our methods and to obtain better exploration strategies.
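As a concrete illustration, the following minimal Python sketch shows the core of this replay-style evaluation. The event layout (context, shown item, click flag) and the function name are our own assumptions for illustration; they are not part of ORP or the logged data format.

def replay_ctr(logged_events, policy):
    """Estimate the CTR a policy would have achieved on logged traffic.

    logged_events: iterable of (context, shown_item, clicked) tuples,
                   ideally collected under a (near-)uniform logging policy.
    policy:        callable mapping a context to one recommended item.
    """
    matches, clicks = 0, 0
    for context, shown_item, clicked in logged_events:
        # Disregard events where the policy's pick was never shown to the user.
        if policy(context) != shown_item:
            continue
        matches += 1
        clicks += int(clicked)
    return clicks / matches if matches else None

Only matching events enter the estimate, which is exactly the condition under which Li et al. [38] obtain an unbiased result.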
6.7 Discussion
In this section, we summarize our findings and provide an outlook on future research directions. Suggesting relevant news articles to visitors represents a major challenge for online news portals, in particular because systems typically have to deal with insufficient information about users' preferences. Most users interact with only a few news articles and focus their attention on small subsets. In addition, a stream of new articles continuously enters the portals' collections. This blurs the relations between visitors and articles. Established recommendation algorithms generally assume rather static preferences. Thus, news portals had to come up with novel methods to support visitors as they search for relevant news. Portals typically implement a variety of recommendation algorithms in order to cover the different facets of relevance. Combinations of these algorithms serve visitors with recommended readings, considering factors such as context, popularity, and recency.
Barriers between academia and industry impede further improvements of algorithmic performance. Companies avoid publishing data. On the one hand, they may fear privacy issues. On the other hand, they consider their data an asset to their company which they seek to protect. Conversely, academia generates ideas on how to provide better suggestions, but struggles to evaluate these approaches due to a lack of data. Recently, the company plista constructed the "Open Recommendation Platform" (ORP). The platform provides researchers with access to an actual news recommendation system. Plista expects to improve its recommendation quality. Researchers get the chance to evaluate their ideas with feedback from actual users.
Simultaneously, researchers face the technical requirements of a large-scale content provider. A large volume of requests has to be handled at high rates. The system grants at most 100 ms to return the list of recommended items. Researchers who manage to meet these restrictions have the unique opportunity to evaluate at large scale. Millions of users request news article recommendations through ORP.
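To make the latency constraint concrete, the sketch below shows one common way to respect such a response budget: score candidates until the deadline approaches, then fall back to a precomputed list. The handler, the ranker interface, and the fallback list are hypothetical and do not reproduce ORP's actual interface.

import time

RESPONSE_BUDGET_S = 0.100  # roughly the 100 ms granted to return recommendations

def recommend_with_budget(request, ranker, fallback_items, k=5):
    """Rank candidates until the time budget is nearly spent, then answer."""
    deadline = time.monotonic() + RESPONSE_BUDGET_S
    scored = []
    for candidate in ranker.candidates(request):   # hypothetical ranker interface
        if time.monotonic() >= deadline:
            break                                  # stop scoring before the deadline
        scored.append((ranker.score(request, candidate), candidate))
    if not scored:
        return fallback_items[:k]                  # e.g. recently popular articles
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:k]]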
Evaluation concentrates on the click-through rate (CTR). Other evaluation criteria require graded feedback. For instance, the root mean squared error (RMSE; the evaluation criterion of the Netflix Prize) requires numerically expressed preferences. Users reading news online tend to express their preferences at most through their selections.
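For reference, the two criteria can be written as follows (standard definitions, added here for clarity), where clicks are counted over the recommendations actually shown, and $\hat{r}_{u,i}$ and $r_{u,i}$ denote the predicted and observed ratings on a held-out set $\mathcal{T}$:

\[
\mathrm{CTR} = \frac{\#\,\text{clicks}}{\#\,\text{recommendations shown}},
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{|\mathcal{T}|}\sum_{(u,i)\in\mathcal{T}}\bigl(\hat{r}_{u,i}-r_{u,i}\bigr)^{2}}.
\]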
We identify various directions for future research. We admit that the CTR might not fully capture user preferences. Users may accidentally click on recommendations. Others may immediately abandon the recommended item. Conversely, users may not click on recommendations because they did not notice them. For instance, recommendations placed at the bottom of the web page require users to scroll down to be seen. Future research may enrich evaluation with additional factors such as dwell times. Detecting hidden patterns in interactions represents another future research direction.